
June 5, 2025 · 58 mins


In this electric episode of Sidecar Sync, Amith Nagarajan and Mallory Mejias unpack the latest AI announcements from Google I/O and the highly anticipated Claude 4 release from Anthropic. They explore what Google's Deep Think means for AI reasoning, debate the creative and ethical implications of video generation tools like Veo 3 and Flow, and rave about Claude's new voice mode. Plus, they reflect on the seismic shift AI is bringing to content, coding, and SEO—alongside some AC/DC-fueled Chicago memories and a preview of the upcoming digitalNow conference.

"If you dislike change, you're going to dislike irrelevance even more." - Eric Shinseki
https://shorturl.at/39XvA

🤖 Join the AI Mastermind:  https://sidecar.ai/association-ai-mastermind

💡 Find out more about Sidecar’s CESSE Partnership - https://shorturl.at/LpEYb

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
https://learn.sidecar.ai

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecar.ai/ai

🎉 Thank you to our sponsor https://memberjunction.com/

🛠 AI Tools and Resources Mentioned in This Episode:

Claude 4 Opus ➡ https://www.anthropic.com/news/claude-4-opus
Claude Voice Mode ➡ https://www.anthropic.com/news/claude-voice-mode
Gemini 2.5 Pro ➡ https://deepmind.google/technologies/gemini
Gemini Ultra ➡ https://deepmind.google/technologies/gemini
Gemini Flash ➡ https://deepmind.google/technologies/gemini
Google Veo 3 ➡ https://deepmind.google/technologies/veo
Google Imagen 4 ➡ https://deepmind.google/technologies/image
Flow by Google ➡ https://deepmind.google/technologies/flow

📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/

🎉 More from Today’s Sponsors:
Member Junction https://memberjunction.com/

🚀 Sidecar on LinkedIn
https://www.linkedin.com/company/sidecar-global/

👍 Like & Subscribe!
https://x.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecar.ai/

Amith Nagarajan is the Chairman of Blue Cypress https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory:
https://linkedin.com/mallorymejias


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
The way to think of these AI tools is they're part of your team, and they can help you brainstorm. They can help you come up with new ideas. They can help come up with creative alternatives to something you're working on. So give it a shot. The best way to learn AI is to play with AI. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and

(00:21):
developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host.

(00:43):
Greetings, Sidecar Sync listeners and viewers. Welcome to this episode. We are here to talk about an interesting set of topics, as usual, at the intersection of all things associations and the world of AI. My name is Amith Nagarajan.

Speaker 2 (01:00):
And my name is Mallory Mejias.

Speaker 1 (01:02):
And we're your hosts and, as always, we've prepared a couple of topics for you that we think are really interesting at this intersection of where the world of associations is going and needs to go and, of course, what's happening in the world of artificial intelligence, which is always moving and going really, really fast. It's a crazy time. How are you doing today, Mallory?

Speaker 2 (01:23):
I'm doing pretty well, Amith. I think today was a fun example. You sent me a Teams message right before we started recording, with an announcement about Claude Voice that had just been released maybe what, 15 hours ago, 18 hours ago, that we had to hurry up and add into today's script. So I think you're right, we're seeing AI move faster and faster.

(01:43):
It can be tough to keep up with, but at least we have this weekly meeting to kind of hold ourselves accountable.

Speaker 1 (01:50):
Well, it reminds me of that old quote. I don't remember who it's from. We'll have to find out if there's a definitive attribution of this particular quote and link it in the pod notes. But it's something along the lines of: if you don't like change, you'll like obsolescence less. So that's certainly an appropriate thing to remember when we're dealing with the pace of change. And, for everyone, you know, I find it overwhelming as well.

(02:11):
I mean, I kind of enjoy it on the one hand, because it's super fun. It's kind of like Christmas every day, where you get these new toys to play with. But the flip side of it is, it's like, how do you keep up with these things, you know? And especially if a big part of your business is staying on top of AI and you find out, well, we didn't know about Claude's audio mode. How can we be so far behind? We're 15 hours behind, Mallory. I mean, we're just doing a horrible, horrible job.

(02:35):
So I won't steal our thunder from that, but I am excited about audio mode. Speaking of thunder, this weekend I was up in Chicago and got to check out AC/DC, and my teenage daughter was with me. I got to experience AC/DC, one of my favorite all-time bands,

(02:56):
but live I'd never seen them before, and she got to see the band play, which was pretty fun, and seeing her sing along to Thunderstruck was quite fun, you know.

Speaker 2 (03:05):
Wait, that's awesome. I had no idea. Were you just in Chicago for fun, for business?

Speaker 1 (03:09):
No, we were. So every year I take each of my kids on a one-on-one trip. I'll take them just for, like, a long weekend, usually leaving a Friday morning and coming back on a Sunday night, and I let the kids pick where they're going. My son always picks going skiing, so we kind of do the same thing and just ski really hard for three days and come

(03:30):
home, and that's always super fun. And then my daughter picks, you know, usually a city. She's a city person. So we've been to New York, we've been to Chicago, we've been to Orlando, we've been to a bunch of places, San Francisco one time. Actually, this is our second trip to Chicago that we've done together. The first one we did was eight or nine years ago, and she was really small at the time.

(03:56):
What was really cool back then was, Chicago does a lot of really cool things in terms of street art, and at the time, eight or nine years ago, they had a bunch of dog statues, which I think were there to honor the canine police units. Something along those lines is what I recall. They're not generally on display across the city as they were at that point in time, you know, several years ago, and both my kid and I are dog nuts, so seeing those dog statues all over the place was awesome.

(04:18):
But it happened to be that we were checking into the Sheraton Hotel downtown, and there was one of those dog statues right there, like, greeting us as we got out of the cab, you know, coming into the hotel. So that was a cool way to start the trip, and then we got to see AC/DC live on Saturday night. So it was awesome. We had a great time.

Speaker 2 (04:35):
That's really awesome. I love the idea of doing a special trip with each of your kids solo each year. That's really special. I remember one year, too, I think y'all went to San Diego, or maybe it was, like, San Francisco. That was a few years ago.
Speaker 1 (04:47):
Yeah, we did a California trip, and that's where I'm from originally. We got to go to San Francisco and do a bunch of cool stuff, and I showed her where I grew up, which was fun too.
But yeah, I highly recommend it if you have kids and if you can get away, even if it's somewhere, you know, a short driving distance. Just spend a day, two days. It's interesting, the kind of stuff, especially in the teen years. I've got a rising sophomore and a rising senior in high school,

(05:11):
and, you know, you don't get a whole ton of time with them one-on-one, so it's cool to just break away and spend a little bit of solo time with each kid. And, you know, if you have a family with, like, six or seven children, I guess that's a lot harder. But I've only got two, so it's pretty straightforward.

Speaker 2 (05:24):
It works out well for you. Yeah, well, I love Chicago. It's a great city. If only we knew of another event in Chicago coming up this year that maybe our listeners could go to. Any ideas, Amith?

Speaker 1 (05:36):
Yeah, you know, it's pretty awesome. I think it will compare very well to AC/DC in terms of how cool it is. We are going to be hosting digitalNow in Chicago, November 2nd through 5th, at the beautiful Loews Hotel. I was actually next door to the Loews Hotel when I was there

(05:56):
this weekend, and the Loews Hotel is spectacular. We have some really fun events in that neighborhood as well that people can walk to, so it's going to be a great event.
We'll be learning deeply all the things that are happening in the world of AI and emerging technology in general, and how they apply to the world of associations, as we always do each year

(06:17):
in the fall, when we host our digitalNow conferences, which we've been doing for quite a number of years now. It's really a touch point in the year where we can look at what's happened since the last digitalNow, reflect, think deeply about the future and really build relationships with a wonderful community. We expect to have in the neighborhood of about 300 people

(06:37):
there this year, and that should be the biggest digitalNow ever. And, most importantly, the quality of the community coming together, the care that they have for advancing their organizations and helping their colleagues advance across organizational boundaries, is really what gives digitalNow its energy. It happens to be that we're talking about a lot of AI topics, but it's really about the community of these forward-looking

(06:59):
practitioners. We think it's a pretty unique community and a unique event, so I definitely encourage folks to consider putting that on the calendar. Chicago, November 2nd through 5th.

Speaker 2 (07:09):
Absolutely, and we're starting to announce some keynote speakers as well. Amith, you were on the docket. I'm assuming you'll be delivering the first keynote, kicking off the event.

Speaker 1 (07:17):
That's usually my habit, you know. I'll get up there and start talking about whatever's on my mind. And it's pretty much just like that. But yeah, I'll be opening it up and sharing some general thoughts on the broader arc of AI, and at that point in time, who knows what we'll be covering? Because, you know, in all honesty, I do prepare. I prepare keynotes well in advance, on the one hand, but I

(07:39):
really am tuning them up until literally the night before, usually. It's just so hard to prepare for delivering a message on anything related to AI if you're not that dynamic with it. It's not so much that the tools and technologies are changing so much, which they are, but it's more about how do you get people on board with the concept of exponential change, and the more

(08:00):
up-to-date and the more relevant the examples are, I find that you can bring an audience along in a much more effective way.

Speaker 2 (08:07):
100%. I feel like if you had prepared a keynote for today, which we're recording in late May, right after Memorial Day, and had prepared it in January, it would be obsolete. There would be no point in presenting that keynote on AI. So I think you almost have to have a framework or structure ahead of time, but keep tweaking almost up until, what, 15 hours before?

Speaker 1 (08:26):
Yeah, totally.

Speaker 2 (08:29):
All right. Today we've got an exciting lineup of topics. We're going to be talking about Google's I/O conference, which they recently held, and some of the updates and releases that came out of that, and then we'll also be talking about the Claude 4 release, which I'm personally very excited about because Claude is my favorite model. I'll go ahead and put that out there. I've said it a few times, but that's the one I go to for the

(08:51):
most part.
But first we're starting off with the Google I/O conference, Google's annual developer conference, typically held in May in Mountain View, California. It's the company's flagship event for unveiling the latest advancements in technology. Google I/O 2025 was held in May and, of course, was dominated by artificial intelligence advancements, with a particular

(09:12):
focus on infusing AI into nearly every product and service. So I'm going to cover some of the important releases and updates from this year's conference. First, focusing on the Gemini AI platform: Gemini 2.5 Pro, which we've covered on the pod, received a significant upgrade with the introduction of Deep Think, an experimental enhanced reasoning mode.

(09:34):
Deep Think allows the model to consider multiple answers and reason in parallel. We've seen something similar come out of OpenAI and Anthropic as well. They also released Gemini Ultra, a new top-tier subscription at about $250 a month. It offers the highest level of access to Google's AI tools,

(09:54):
including Veo 3, Flow, Deep Think and expanded limits on platforms like NotebookLM and Whisk. And then Gemini 2.5 Flash, the lighter-weight model, is now available to all users via the Gemini app.
Moving on to search and Workspace: Google's AI Mode, a chat-like search experience powered by Gemini, is now

(10:16):
available to all users in the US. It introduces features like in-depth search, chart creation for financial and sports queries, and the ability to shop directly through AI Mode. Gemini can now generate personalized smart replies in Gmail by drawing on a user's email history, with this feature rolling out to paying subscribers this summer.

(10:36):
And then generative AI is being integrated into Google Workspace apps, Maps and Chrome, making these tools more intelligent and more personalized.
Maybe the most exciting part for me was looking at the generative AI model releases and updates. The latest version of Google's text-to-image generator, Imagen

(10:56):
4, improves text rendering and supports exporting images in multiple formats. I tested this out right before the pod. The speed: incredible. I gave it a really long prompt; I've been using those types of prompts for ChatGPT's 4o image generator. ChatGPT tends to take like 30 seconds to a minute to generate

(11:17):
the image. This generated the images near-instantaneously, which was incredible, but the text was not as good as ChatGPT's. It got some of the words right, some of the words not so right. So I'll say I'm sticking with ChatGPT for now, but I wanted to share the speed. Very impressive.
They also released the next-generation video generator, Veo 3, which produces synchronized video and audio, including sound

(11:41):
effects, background noise and dialogue. I did a quick experiment with Veo 3. I shared this with you yesterday, Amith. I want to insert a quick clip here. If you saw that on YouTube, you saw how impressive the details were in the video. It's only eight seconds for now, because it's still in preview,

(12:01):
but I will say it's pretty impressive for what I gave it in a single prompt. My prompt was essentially: act as a member, or pretend we're seeing a member, of the Association of Really Awesome Nurses at the annual meeting, giving a testimonial directly to camera about why being in that association and attending that

(12:22):
event is so important. So, for one prompt, quite impressive. And I'm sure you all have seen, perhaps, the video making the rounds on social media. It's like a car expo hall event, and you almost can't, you really can't, tell that it's AI-generated.
They also released Flow, a new application that uses

(12:44):
Veo, Imagen and Gemini to generate short AI videos from text or image prompts. It includes scene-building tools for creating longer AI-generated films. Something interesting I had not heard about: Project Starline, Google's 3D video chat booth, is evolving into Google Beam. It's an HP-branded device that uses a light field display

(13:07):
and six cameras to create a 3D video chat experience targeting enterprise customers. Pretty neat. Maybe we'll have a Google Beam at digitalNow one day. We're also seeing new APIs and sample apps, like Androidify, showcase how generative AI can transform user experience and app development workflows.
We didn't see any new hardware come out of Google I/O 2025, like

(13:31):
we had in the past. They did release an important update, though: they're beginning to allow large language models to access personal data for more tailored experiences, starting with Gmail, like I mentioned, and expanding to other products soon. So, Amith, a lot, a lot, a lot to unpack here. Of all of the updates that I just covered, what do you think

(13:53):
is most notable or most exciting?

Speaker 1 (13:55):
I think this is an interesting point to look at Google's overall development arc over the last couple of years, because, you know, you covered a lot of ground there. So clearly Google has been, let's just say, very busy. We talked about Google and, at the time, their lack of a contemporary AI offering back in the early days of ChatGPT. We talked about how Google had, you know, issued

(14:24):
this internal announcement that they had to focus their resources, and it was, you know, essentially an existential threat to their business, which it was, and is, if they weren't in the race. But, you know, clearly what they've done here over the last couple of years is not only caught up, but I would argue that they're leading the pack in many areas of artificial intelligence, which is kind of natural for these guys. You know, Google, you have to remember, starting with search, actually was a form of AI in a very early style.

(14:47):
But, you know, these guys have been deep, deep, deep in computer science and AI for a very long time. The transformer architecture, which all of this stuff is based on, was invented in their lab years ago. But they, you know, they didn't commercialize it quickly or as extensively as others did. And so the point I'm making, before we get into the specifics, is: if you find that you are

(15:08):
behind on AI or something else like that, it doesn't mean that you're lost. It means that you need to prioritize. It means that you need to cut out the things that you don't need to do and focus on the things that are really critical to your future.
And associations are often led by groupthink and committees and kind of an everybody-gets-a-trophy kind of mindset in a

(15:30):
committee, where nobody wants to kill projects that they personally felt were important. But these are the times when you have to lead with conviction, and you have to be willing to narrow your lens and scope down to just a handful of activities that you're going to go crush it with, and I think Google has done a really good job of that. So I wanted to give them some props, but also relate that back to the world of associations, who are often juggling far too

(15:53):
many priorities, and that's where you have to make some tough decisions. The difficult conversation in the strategy room isn't how to add ideas to the mix. It's how to prune ideas, how to defer ideas and, in some cases, how to kill ideas off. So I think Google's done a good job in focusing their resources on AI generally.
Coming back to your question, Mallory, I think that what they

(16:14):
talked about with Deep Think, you know, it's one of like 50 announcements, so let's unpack that just a tiny bit. Deep Think goes beyond the idea of extended thinking in Claude, or the longer form of reasoning that's in OpenAI's o3 and o4 models, which are kind of single-threaded in a sense.

(16:34):
With those models, when you talk about either test-time compute or inference-time compute, you're essentially giving the model more time to think, and, like our brains, they'll start exploring, typically one solution at a time, then the next solution, and then they'll try to figure out, well, what's the best solution: let me break down the problem into chunks and solve it. We've covered that a lot on this pod and in our content

(16:56):
elsewhere.
And what Deep Think does that's interesting is, in parallel, consider multiple possible answers. Essentially, imagine that reasoning process we just discussed, but happening three, five, 10, 50 times in parallel, and then pulling back the best answer based upon that combined reasoning. It's like saying, hey, instead of having one really smart PhD-

(17:17):
level person work for you on a problem, let's spawn a room of 30 such PhDs, or 50 or 100 or 1,000, have them all work on it in parallel, and then bring back the best ideas and combine them. That has a lot of promise. Obviously, it's computationally resource-intensive, but I think it holds a lot of promise. We ourselves here at Blue Cypress, not nearly at the level

(17:39):
of what Google does, obviously, but in our own small way, are experimenting with the same concept with some of our AI agents, where, rather than solving for a particular user problem one piece at a time, we're actually generating multiple possible responses and then using a supervisor AI model to pick the best answer.
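[Editor's note: the "generate many candidates in parallel, let a supervisor pick" pattern described here is often called best-of-N sampling. The sketch below is a hypothetical illustration of that pattern, not Google's or Blue Cypress's actual implementation; generate_candidate and supervisor_score are stand-ins for real LLM API calls.]

```python
# Illustrative sketch of best-of-N sampling with a supervisor.
# generate_candidate and supervisor_score are hypothetical stand-ins:
# in a real system, each would be a call to an LLM API.
from concurrent.futures import ThreadPoolExecutor

def generate_candidate(prompt: str, seed: int) -> str:
    # Stand-in for one independent reasoning attempt (e.g. one LLM call
    # with sampling enabled so each attempt can differ).
    return f"answer-{seed} for: {prompt}"

def supervisor_score(prompt: str, candidate: str) -> float:
    # Stand-in for a supervisor model grading a candidate answer.
    # Here we simply prefer higher seed numbers so selection is visible.
    return float(candidate.split("-")[1].split(" ")[0])

def best_of_n(prompt: str, n: int = 5) -> str:
    # Run n reasoning attempts in parallel, then return the candidate
    # the supervisor scores highest.
    with ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda s: generate_candidate(prompt, s), range(n)))
    return max(candidates, key=lambda c: supervisor_score(prompt, c))

print(best_of_n("What is the best onboarding flow for new members?"))
```

In practice the supervisor is usually a second model prompted to grade or rank finished candidates, which costs far fewer tokens than producing them, so the selection step adds relatively little overhead.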
So I think a lot of people are experimenting with this. What's happening is, if you look at the broader arc of the

(18:00):
compute curve, we have more resources available and faster inference available than ever before. Mallory, tell us about the Flow AI filmmaking app, because I found that, just in terms of creativity, it's certainly a very interesting announcement.

Speaker 2 (18:26):
Yeah, so I didn't have a chance to play with Flow directly, but, like I mentioned, it's kind of a combination of all the things: Veo 3, Gemini, which I have experimented with. I will say my first experience with this was actually on Reddit, which you and I have been talking about just recently. I'm an avid Reddit user. I like to go to that platform if I need recommendations or

(18:48):
advice and, like, crowdsource information for travel, for acting classes, for recipes, all sorts of things. So I'm in an acting thread, and someone dropped the release of Veo 3 and the video that was going viral, and they basically said this is going to completely ruin, like, performing arts and

(19:09):
everything. It was kind of a doomsday post, which I immediately clicked on, and I was like, ooh, my two worlds clash, right: my acting world and my Sidecar Sync AI-enthusiast world. So I will say I was a little bit taken aback initially. I knew that this was possible, but seeing it in that context, with all the comments of other actors saying this is going to ruin everything, we're never going to be able to act in the

(19:31):
future, this is going to take over. I had a little bit of a negative angle on it, perhaps,
but it's just two sides of a coin, I think. On one side, you're allowing so many people to create things that never would have been possible, including myself: having an idea, and not just in the US, someone anywhere in the world having an idea, a story that they want to tell, something that's emotional for them,

(19:53):
impactful for them, that might be helpful for other people, and being able to do that at your fingertips, for essentially free, is incredible. On the reverse side, being able to do that with no humans in the loop, or very minimal humans in the loop, when living life as a creative is already really difficult, to make a living, that sucks.

(20:14):
So I do think, and I read this in the thread, I do think art for art's sake will continue. I think there's going to be a group of people that want to consume art made by humans, because that's what makes them feel the most, and then there will be a market for AI-generated art as well. And maybe there's a little bit of a crossover in the middle, when you can appreciate a filmmaker with

(20:36):
limited resources creating something really beautiful out of this technology. But it's hard. It's hard to grapple with, for sure.

Speaker 1 (20:44):
That makes sense. Thanks for sharing that perspective. It's hard for me to relate to, in the sense that I don't have a creative side in that world, so I don't feel the impact it has on that world the same way. But I totally get where you're coming from, and I could see that being a risk. I mean, I think the way I'd relate to that is, you know, with computer science and software development, there's definitely a creative element to that, but just generally, the displacement by AI coding agents is going to wipe out a large

(21:08):
percentage of the coding work that humans are doing right now. So what does that mean for the world? That's deeply concerning. And when you see what these things can do at ridiculous speeds, it's truly phenomenal. It's just an amazing experience. So hopefully there's some kind of equilibrium that's met, where there's so much more opportunity that there is ample space for

(21:29):
the humans to do what they do and to maintain the artistic side. I will say, like, personally, on the consumption side, particularly with entertainment, I can't see myself really consuming a whole lot of AI-generated, like, TV or movies. I don't know, maybe, but I really would like to see, you know, people do their thing. I think there's tremendous value as a consumer

(21:50):
in experiencing what people create. So I don't know, but maybe that perspective will change too, who knows.
On the side of the business use of something like the Flow AI filmmaking app, one of the thoughts that I had was: you used the term storytelling, and I think that's the key thing to double-click on. As a species, we've been telling stories since we've been

(22:12):
able to move around and talk, right, since we've been drawing pictures on caves and getting together around fires and conveying stories verbally. And if we can do that at scale better, if we can communicate our business ideas through storytelling, that's an interesting concept. I think about, like, our AI learning content that we deliver through

(22:32):
Sidecar's AI certifications and AI courses, and I'd love to have additional modalities of content in there. For those of you who haven't experienced it yet, if you go to Sidecar's AI Learning Hub, you will see AI-generated content: we use AI avatars and we use AI voices, and the content itself,

(22:52):
we heavily use AI to help us prepare it.
Everything has been touched and reviewed and originated by a human, but we use a heavy, heavy dose of AI to prepare our AI learning content. It gives us tremendous flexibility and speed, the ability to modify it, working in partnership with some of our clients to deliver AI learning for whatever their industry is. It's wonderful, but the modality is fairly simple.

(23:13):
It's an avatar, like, you know, which is basically what I was doing and you were doing when we were recording these courses: manually speaking and talking about each slide, and then there are some demos mixed in. But wouldn't it be great if you could have additional dimensionality to that, where there are, you know, some kind of videos, maybe three-minute or five-minute videos, explaining a complex concept? An example is in our AI learning content.

(23:36):
We have a section of our data course all about vectors, and in that lesson we talk about this concept of vector embeddings, what they do and how they work in the world of AI, and we try to explain it in a fairly non-technical way. I think there could be some really interesting animations and videos that this type of platform could create. So I think that use case could be really interesting.
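[Editor's note: the vector embedding idea mentioned above can be sketched in a few lines: text is mapped to a vector of numbers, and texts with similar meanings land close together, often measured with cosine similarity. The three-dimensional vectors and topic names below are made up for illustration; real embedding models produce hundreds or thousands of dimensions.]

```python
# Toy sketch of vector embeddings and cosine similarity. The vectors
# here are hypothetical; a real embedding model would produce them.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: the first two "texts" cover similar topics,
# so their vectors point in similar directions.
membership_renewal = [0.9, 0.1, 0.2]
member_retention = [0.8, 0.2, 0.1]
annual_gala_menu = [0.1, 0.9, 0.7]

print(cosine_similarity(membership_renewal, member_retention))  # close to 1
print(cosine_similarity(membership_renewal, annual_gala_menu))  # much lower
```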

(23:57):
And for our association community, that's pretty much what associations do: they tell stories in their space. So could these tools be used, even in some experimental ways initially, to do things you would never be able to do, that you'd never afford, you know, hiring actors to go create an animation for something like that, right? What do you think about that use case?

Speaker 2 (24:16):
I think it's a great idea, and I think it goes back to the idea of breaking your brain. We often think, when there's a topic that's serious or dry, perhaps, maybe like vector databases, that there's no place for animation in it, no place for fun, because we've never done it like that historically. But I've mentioned on the pod before that across Blue Cypress we use Ninjio, which is a platform that provides

(24:39):
cybersecurity training through animations, and that's a very serious topic that you have to be very thorough with. But they're using animations and cartoons and storytelling, and I like them. Like, yeah, those videos are not something I just click through, click through, have to finish. They're actually quite enjoyable, and I find myself retaining more of the content because it's in that story mode.

(24:59):
So if you are discouraged by the idea of, I don't know, having serious content portrayed through a story, I encourage you to try it out. Maybe not every piece of content needs to be a story, but it's definitely a powerful medium.
Amith, on the Veo 3 front, obviously my little eight-second example was a fake testimonial about an association.

(25:19):
After I created that, I was impressed, but I also thought, huh, well, perhaps you shouldn't create a fake testimonial. That's probably not a great use case, because if an association posted that, it might erode trust. Veo 3 is incredible. I highly encourage you all to check it out. But I'm curious for you, Amith: what immediate or near-term use cases do you see for associations coming out of

(25:41):
something like hyper-realistic videos from a single prompt?

Speaker 1 (25:45):
You know, I think, the idea of illustrating,
perhaps things that aren't anindividual you know, attesting
to their satisfaction with yourmembership or your events or
whatever, but other things whereyou're trying to illustrate a
concept that's better, betterconveyed verbally or through
video, or perhaps it's it's, youthink about an example where
some of the most effectiveinstructional videos that I find

(26:07):
on YouTube, which I go to allthe time for different things,
is, you know, a person talking,but they might have like a
whiteboard and they're drawingon it and they're kind of
creating an image, or maybethey're just showing, showing
some concept and illustrating.
So it's kind of it's multimodalin that sense, I guess.
And so if this tool can producevideos like that, that can be
interesting, or maybe evenproducing like super realistic

(26:28):
looking animations where you're showing concepts in a very visual, like 3D, 4D way, where these concepts are coming to life as things that you're illustrating.
I think that has a lot of application in learning in general, where you're trying to convey concepts and you're trying to literally illustrate those concepts.
A lot of times we say, hey, let's illustrate this idea with

(26:50):
an example, and we'll write out the example or speak to an example.
But could we potentially somehow visualize that, right?
Another use case that comes to mind is when we're thinking about, like, interacting with a website or some other kind of application: being able to kind of visualize how people might use such a tool in the real world or, you know, visualizing

(27:13):
it in a prototype environment.
There's all sorts of different ways I think you could throw, you know, concepts at this.
I think that the things people are doing is like, oh, what kinds of videos are we used to seeing?
We're used to seeing videos of ourselves.
We're used to seeing videos of, maybe, animals or, you know, just different kinds of images that are out there, of landscapes or whatever.
So we just start doing that with these tools.

(27:34):
But what kinds of videos do we not have that we'd love to have?
That would help us illustrate our point, help us tell stories better.
And I think there's, like, this whole latent space in the world of possible videos, and we've explored this tiny little percentage.
It's kind of going back to the whole Jevons paradox conversation we had around AI inference, where we say, hey, look, as the cost of something approaches near zero, that

(27:56):
increases the demand and it also, on the supply side, importantly, increases the people producing these things, because they see this 100x, 1,000x, million-x increase in the opportunity.
And that's why you see people running out there and building these massive clouds, these hyperscalers, building opportunities for doing more and more AI inference.
So I think that's what it creates: use cases

(28:18):
No one would have dreamed up, even the inventors of the technology.

Speaker 2 (28:23):
I want to zoom in a bit on AI search, or AI Overviews.
Anecdotally, in my opinion, I feel like these have gotten a lot better.
On Google Search, I find I don't have to really click into as many pages when I'm looking for an answer.
You know, this morning I was looking up, like, the fiber content in my oatmeal, and it had the nice AI Overview.
I said, perfect.

(28:43):
So I'm curious.
We've talked on the podcast before about SEO being dead, which I know is a little bit alarmist, but I just don't know if it's going to live on in the same way that it has.
What are your thoughts there for associations?

Speaker 1 (28:57):
Yeah, there's a lot of marketing folks, leading marketing influencers, who have been posting stuff on LinkedIn and elsewhere saying, like, look at the traffic stats of the top hundred websites in terms of where they're getting the traffic from.
First of all, it's gone down in general.
Secondly, traffic from organic search has gone down dramatically.
So it's definitely a real thing, it's definitely an impact.

(29:19):
I wouldn't go so far as to say SEO is completely dead, but I do think that you need to look at strategies to supplement that and say, well, how do we make sure the AI models pick up on our content?
Hopefully these AI models, and certainly Google's, are doing this to provide proper attribution.
Now, whether the attribution is good or not good, the question

(29:39):
would be: is there utility to the consumer to actually go through and click, right?
So if there isn't that, even if you have been discovered by the AI, and the AI is like, oh, I love Mallory's content, I'm totally going to use Mallory's content in preparing my answer to Amith's question.
But if I'm like, that's cool, I totally know how to do that thing now, I don't need to go to Mallory's website.

(30:04):
Does that really help Mallory?
Right, I would argue it probably doesn't.
So it raises all these questions about, of course, the copyright question, but just really the flow of value creation, and so where the value lies is where the consumer goes.
It's really that simple.
Independent of the law, independent of whatever is right and wrong, people are going to go to the lowest-friction, highest-value-creation environment they can.
That's just the way we all work.

(30:25):
That's true for businesses, that's true for individuals.
So if you can solve your problem directly in the Google search, you're going to go there, or it could be directly within Claude or directly within ChatGPT, because those tools are able to do searches, and they're quite good at it, and give you a comprehensive answer.
I tend to do that, actually, in Claude all the time now, where I've turned on web search by default in my conversational

(30:46):
settings, and Claude is often doing web searches in conversations I have, and coming back with citations.
But I too have experienced Google's improvement in AI search and AI Overviews.
So, you know, I just think that we have to be aware of this.
I don't really have a particular recommendation on what to do.
I still think that having amazing content is really important, because, I mean, expertise, particularly in

(31:08):
narrow spaces like the worlds that we live in, in associations, where you have an association for a hyper-specialized, you know, subdomain of a profession.
That content has to be built somewhere.
As of this moment in time, the AI is not creating that new content, you know, out of whole cloth, really.
I mean, in some cases it seems like it is, but, you know, the source of truth for your domain still very much can be you, and

(31:32):
I think that, as an association, what I'd be thinking about more is how to make sure that my content, my traditional content repository, isn't locked away in such a fashion that it's difficult for people to access.
That's been a common complaint.
You know, if you're an association leader, you probably have gotten at some point a phone call from a board member saying, why does your website suck?

(31:53):
How come it's so hard to find things?
I know you have this content, but I can't get to it.
I've tried search, I've tried this, I've tried that.
People try all these tools, like, you know, universal or federated search tools, and none of them really provide answers.
And with AI, there are better ways to approach this than ever before.
So if you're not doing that, you're definitely, you know, on the losing end of the field.

(32:14):
But, you know, even if you are doing those things, that doesn't necessarily solve the problem you asked about, which is, you know, how does this affect people coming from external sources?
Will they find you or not?
But if you don't have an AI-enabled strategy, you're definitely going to be, you know, so hard to use compared to Google's AI search or similar tools, why would people bother?
You know, you're asking them to do, like, 20 backflips just to

(32:35):
approach the front door.

Speaker 2 (32:42):
I know one of our keynote speakers at digitalNow this year is Brian Kelly, who I've had the chance to work with very briefly at Sidecar.
He's a fantastic marketing leader.
I'm curious if he will cover maybe some of this SEO stuff.
I don't know if you know that, but it might be interesting to find out.

Speaker 1 (32:54):
I suspect he will.
Brian is a marketing practitioner and an entrepreneur, and someone who has worked across a very wide array of different businesses, ranging from Fortune 500 companies to small businesses.
He's had a number of his own companies.
He's helped a number of our businesses over the years.
We've known Brian for 15-ish years, and he's an innovative thinker who's done a lot of really cool work with AI.

(33:14):
So I'm excited to hear his talk, and I suspect this theme will definitely be top of mind for him and part of what he discusses.

Speaker 2 (33:25):
Last question for you, Amith, on this topic.
I feel like we're seeing more and more of these top-tier subscriptions.
We've got ChatGPT Pro at $200 a month, Claude Max at $100 a month, and Gemini Ultra at about $250 a month.
What is your stance on these?
Do you feel like any of our listeners should be seeking out maybe to try one of these top tiers, or do you feel like

(33:46):
you're good with the lower tier?

Speaker 1 (33:48):
I think it just depends on the user.
I mean, so for software developers, Claude Max is a hell of a deal.
At $100 a month, you get unlimited use of this tool called Claude Code, which is this agentic AI coding tool that basically all of our team members, across all of our companies, are using extensively, and we're spending way more than $100 a month just paying by the token, essentially, as you go.

(34:08):
So that's a great deal, it's a dead-obvious one.
And then you get benefits like in the Claude desktop tool and the web tool as well.
And, you know, the Gemini Ultra offering, ChatGPT Pro, they all have really attractive features.
I think what happens is, once you start getting into the tier of $100, $200, $300 a month, you're talking about probably a

(34:28):
winner-take-all kind of environment, where you're probably not going to have Claude Max and Gemini Ultra.
You probably pick one of the two, which, of course, is the desire of the companies: to get you into one of their camps.
The more tools you use from a given company, and especially the more they go up the application layer of the stack, and I'll come back and explain specifically what I mean by that,

(34:49):
the more sticky their offering is.
So I think that this makes a ton of sense to the businesses.
They're probably not expecting more than a single-digit percentage of their total audience to go for these subscriptions, but those become the people that then bring the rest of their organization with them, even at a lower tier.
Now, what I mean by going higher up the application layer

(35:09):
of the stack is this: you have the basic model, and the model is capable of having conversations with you, and then you build a UI on top of that so you can actually have an end user, you know, send and receive messages.
That's the basics of all of these tools.
But what do you do on top of that to make it so that you have a higher allegiance to one of these tools over another?
Of course, the model has to be fantastic, right?

(35:30):
I know, Mallory, you mentioned earlier, and you've said this a number of times on the pod, that you're a big fan of Claude and you have been for some time, and I think a lot of that initially was that the model was better from your perspective.
But they also provided features like Artifacts early on, well before ChatGPT had Canvas, and a number of other tools that made the UI more pleasing and more useful to you, right?

(35:54):
So that was a thing that started to get your allegiance.
But now Claude is adding features like Projects, where what you can do is you can create a project in Claude and you can provide certain information, like documents and other things that are part of that project.
Then you can have more threads or more conversations in that project, and your colleagues can as well, within the project.

(36:15):
So that provides an interesting dynamic, because you're still using the same underlying model, but you have more invested in that environment.
Because you've set up the project, you have more people in the project, you're collaborating, you're sharing.
You can create some product-led growth through that tactic, where you share a thread and somebody else wants to see it.
Of course, they have to be part of that same Claude subscription, and ChatGPT is doing similar things.

(36:37):
All of these companies are run by people who have smart product management folks, beyond, of course, their very smart AI developers, and they're thinking about these kinds of things.
So I think, from the consumer perspective, the association leader's seat, there are two things to think about.
Number one: do you want to have a standard where you offer, like, one of these companies' products to all of your employees, or do you offer more than one, but maybe make your

(36:59):
employees choose which product?
You know, right now we provide most of our employees ChatGPT, and many of them Claude as well.
Once you start getting into, like, $20 a month, maybe, you know, it still adds up once you have 100-plus people on these tools.
But, you know, when you have a tool at maybe $100 or $200 for some of your people, now you're really going to start thinking a

(37:20):
lot more critically: hey, do we really need both?
Personally, I've gravitated much more towards Claude over the last couple of months, particularly because of the Claude Code tool.
I like having that environment along with the Claude desktop tool.
But, you know, I think what we have to do is start thinking a little more critically about these things as systems, not just tools.

(37:41):
That's really the point I'm trying to make: they're becoming much more woven into our business workflows than just a place to get, like, a task done.

Speaker 2 (37:50):
Yeah, I think you said that really well, the whole winner-takes-all angle.
I would say at this point I'm not ready to put all of my eggs in the Claude basket, but that's primarily because GPT-4o's image generator in ChatGPT is just so good.
But as soon as Anthropic releases something similar or better, I don't know, maybe I'll be at that point.
But this is a really good segue for us into the next topic,

(38:10):
which is Claude 4, the latest generation of AI models from Anthropic, released in May of 2025.
It comprises two main variants, so we've got Claude Opus 4 and Claude Sonnet 4.
Both models set new benchmarks in coding, advanced reasoning and AI agent capabilities, with Opus 4 positioned as the most

(38:30):
powerful and intelligent model in the Claude family.
Right now, Claude Opus 4 is recognized, at this moment in late May 2025, as the world's best coding model, excelling in complex, long-running tasks and agent workflows, and it achieves state-of-the-art results on coding benchmarks.
It's capable of sustained, multi-hour autonomous workflows,

(38:52):
handling tasks that require thousands of steps and continuous focus, such as independently running for nearly a full workday.
It supports a 200,000-token context window and up to 32,000 output tokens, enabling work with big code bases or documents.
It excels at agentic applications, advanced coding, including code generation, refactoring and debugging,

(39:15):
agentic search and research, and high-quality content creation.
We're seeing those hybrid reasoning modes as well: instant responses for quick queries, or extended step-by-step thinking for deep reasoning, with summaries for long thought processes.
And it integrates extended thinking with tool use (this is in beta right now), allowing the model to alternate between internal reasoning and external tools like web search

(39:38):
or APIs for improved responses.
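For listeners who want to poke at these capabilities programmatically, here is a minimal sketch of what a Claude Opus 4 request exercising the large output budget and extended thinking might look like. The model ID string and the shape of the "thinking" parameter are assumptions based on Anthropic's published SDK conventions, so verify them against the current documentation before relying on this.

```python
# Sketch of a Claude Opus 4 request exercising the features discussed:
# a large output-token budget plus extended step-by-step thinking.
# Model ID and "thinking" parameter shape are assumptions; check Anthropic's docs.

request = {
    "model": "claude-opus-4-20250514",  # assumed Opus 4 model ID
    "max_tokens": 32000,                # Opus 4's stated output ceiling
    # Hybrid reasoning: allot a token budget for extended thinking.
    "thinking": {"type": "enabled", "budget_tokens": 10000},
    "messages": [
        {
            "role": "user",
            "content": "Refactor this module and explain the trade-offs.",
        }
    ],
}

# With the official SDK installed and ANTHROPIC_API_KEY set, you would send it as:
#   import anthropic
#   reply = anthropic.Anthropic().messages.create(**request)

print(request["model"], request["max_tokens"])
```

The point of separating the request dictionary out is just to show which knobs map to the features described above; the actual network call is left commented since it needs an API key.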
Claude Sonnet 4 is the smaller model.
It's still a significant upgrade from Claude Sonnet 3.7, and it balances performance and cost for high-volume applications.
It also excels at code reviews, bug fixes, customer support and more AI-assistant tasks, and you still get that hybrid

(39:59):
reasoning, tool use and improved memory features, though Opus 4 remains superior for the more demanding tasks.
Overall, I would say the innovations and improvements can be summed up like this: both models can use multiple tools at once, enhancing workflow automation and research capabilities.
When given file access, Opus 4 can create and maintain memory

(40:23):
files, storing key facts to maintain context and continuity over long tasks, and both models are 65% less likely to use shortcuts or loopholes in agentic tasks compared to previous versions, improving reliability and alignment.
Something I want to cover briefly here are safety levels.
So Anthropic uses something called AI Safety Levels.

(40:45):
I believe Anthropic invented these: AI Safety Level, or ASL.
It gave Opus 4 an AI Safety Level of 3, and Sonnet is a Level 2; 4 would be the highest.
So Claude 4 Opus is the first Anthropic model deployed under ASL-3, reflecting its advanced capabilities and the associated increase in potential misuse risk.

(41:08):
Internal testing indicated that Claude 4 Opus is more effective than prior models at providing potentially dangerous advice.
And there was also a controlled simulation where they embedded Claude 4 Opus in a fake company, and within this fake company they sent emails between fake employees saying that they wanted to replace Claude 4 Opus as the primary AI model

(41:31):
within the company.
And something interesting that happened in this controlled simulation was that Claude 4 Opus used blackmail so as not to be replaced.
So that is just an example that they provided, and again, it was a controlled simulation, but this perhaps carries a more heightened risk because it is such a powerful and intelligent model.

(41:52):
Last thing I want to cover here is that 15 or 18 hours ago, Anthropic rolled out Claude voice mode, which is incredibly exciting.
Voice mode has been in ChatGPT, I mean, you would know better than me, like a year or so? More than that? Yeah, something like that.

Speaker 1 (42:08):
Advanced voice mode, all the different terms they use.
But yeah, the current capabilities we have had for maybe three or four months, but probably a year, year and a half, actually.
I remember using it in the early part of 2024.
So they've had some form of voice mode for, let's just say, about 18 months, it seems like.

Speaker 2 (42:23):
For a while, and that's another thing: I know I said the image generator is something that keeps me going back, but also voice mode.
I really like that I can have the back and forth, and I've been waiting and waiting for that within Claude.
So now they're rolling out voice mode in beta for mobile apps over the next few weeks.
This allows you to speak with Claude and hear responses back, so it's a great partner when you're on the go or you want to just

(42:44):
brainstorm out loud.
And then there's also a Google Workspace integration, so paid-plan users can ask Claude things about their calendar, about their email and their docs through voice conversation, which I think is quite neat.
Amith, I have been playing around with Claude 4; primarily my work at Sidecar focuses on the podcast and on

(43:06):
blog writing and production, and so I use Claude quite heavily for all of these things.
So I know the jump from 3.7 to 4.
If you don't use Claude very much, you might think, well, it seems about the same.
Claude 4 is a major improvement, in my opinion, in terms of writing.
It also just seems smarter.
I feel like I have to say a little bit less, or remind it a

(43:28):
little bit less about certain things I've said prior in the conversation.
I'm really happy with Claude 4 Sonnet.
I've actually not tried Opus.
I'm curious if you have tested either out, if you have an opinion.

Speaker 1 (43:39):
Yeah, I was on it the day they released it, quite excited.
It's my primary workhorse tool, as it is for you, Mallory.
I use it for all sorts of things, for a lot of different business tasks in the Claude desktop app that I use on my computer, and also Claude Code, which we'll talk about.
But Claude 4 is definitely much smarter in terms of the day-to-day use.
It gets things right way more often than Claude 3.7 did, so

(44:02):
that's exciting.
That's one way to kind of know if you're on the right track with AI: how often do you have to get it to, like, fix something?
And if it gets it right on the first shot for sometimes fairly complex requests, that's a real positive thing.
I've used both Sonnet and Opus.
I tend to use Sonnet as my default and go to Opus if it doesn't solve the problem, which is pretty rare.
And the other thing you can do is, if you combine Claude 4,

(44:26):
either Sonnet or Opus, with the extended thinking mode and also the research tool that is an option to turn on in Claude, it'll do some pretty impressive work for you, because it's using that higher level of intelligence.
But then it's kind of doing this agentic loop where it gathers information and kind of distills it down.
Sometimes these deep-research-type tasks in Claude can take 5,

(44:48):
10, 20 minutes, but it'll come back to you with some pretty incredible findings.
So I like to combine Sonnet with extended thinking turned on, and the research tool, not always turned on because it does take quite a long time, but I use it fairly regularly, and I find it is extraordinary at producing great results.
There's an experiment I would like to offer and invite our

(45:11):
listeners and viewers to try on their end.
So take your website, whatever your website is, take the website URL, drop it into Claude and say: hey, Claude, this is my website, please take a look at it.
And these are the kinds of points of feedback my members are often giving me.
It's hard to find information, it's too complicated, it's not

(45:33):
contemporary enough, blah, blah, blah.
All the different common things.
You probably know them by heart, because quite likely you hear them regularly from your members.
And say: hey, Claude, I'd love for you to create an artifact that is an interactive prototype of what my website should look like, that would address these concerns.
And a minute or two later, especially if you, again, turn on

(45:56):
extended thinking mode (for this one, maybe try Opus), you will see an artifact come to life with a new and improved version of your website that might impress you.
I actually suspect it will.
And of course, you can go through many iterations.
You can give feedback.
You can just say, well, give me a couple of other options, and it'll give you a couple of other options.
And I think this is not necessarily suggesting that this

(46:17):
replaces the people who do website design for associations, but rather should supplement that process, getting experts involved to help you tune things and implement them.
There's a lot of technology involved in making the website actually work, but a lot of times people get stuck with the frame of thinking they've had in the past.
And so, you know, you can take that experiment and say, hey, wouldn't it be great if

(46:39):
our site could look like this?
And a lot of times us humans will have all the reasons why it's hard to make it look like that, right?
You say, oh, I really like this website, I was inspired by website ABC, and it's a really cool website.
And then there are lots of reasons why you can't have that for your association, whatever that is.
Maybe you don't have the budget, maybe it doesn't fit the

(47:00):
design motif or whatever.
But why not work with Claude to come up with some prototypes for things like that?
Or, I gave you a very general idea; what if you have a specific problem that you're trying to solve?
Let's say, for example, that you would like to have more new members come in, and your association has a new-member application process that, let's just say, is unpleasant, it is

(47:24):
difficult and it takes a lot of steps.
Once again, you could take some screenshots of that, or you could take the actual link to it, if it's available publicly, give it to Claude and say: hey, I really want you to reimagine this as a much more user-friendly experience, something that's dynamic, that's interactive, that maybe is even enjoyable.
I think you will find your experience working with Claude

(47:45):
to be a really eye-opening one, and this example use case that I'm suggesting here, inviting you to experiment, will take advantage of all of these new features of Claude 4, which is why I'm suggesting it.
It's something I've been talking about for a while, but I think at this point just about anybody can do this in the Claude app and see some pretty exciting results.

Speaker 2 (48:07):
And, as you said, it's not about replacing expertise.
It's also very practical.
I don't know about you, Amith; I'm a bit less visual when it comes to, you know, how I learn and process.
So for me to practically explain a visual design of a website with words, that can just be really challenging.
So, having a conversation in voice mode with Claude and then having it spin up this interactive thing that I could

(48:28):
share with web developers and say, here's what I'm thinking, what do you think about this?
That's just practically so much easier.

Speaker 1 (48:37):
Totally.
Yeah, you have this new team member that's part of your team.
That's the way to think of these AI tools: they're part of your team and they can help you brainstorm.
They can help you come up with new ideas.
They can help come up with creative alternatives to something you're working on.
So give it a shot.
The best way to learn AI is to play with AI.

Speaker 2 (48:55):
Well, we mentioned earlier in the pod Claude Code, which you're a fan of, and obviously we've seen some big advancements on the coding side with this Claude 4 release.
Can you help contextualize how big this upgrade is in terms of what you've seen on the coding side?

Speaker 1 (49:10):
Yeah, I mean, it's just better.
That's the simple way to put it, and it's this ongoing, relentless progression of AI intelligence.
And Claude 4 is clearly a leap above not only the prior Claude models, but it's now better than Gemini 2.5 Pro and better than GPT-4.1.
And I would argue it's as good, or probably better actually, than the o3 or o4 models from OpenAI as well in many of the

(49:34):
practical day-to-day use cases.
From a benchmarking perspective, it's about the same, but just in terms of the day-to-day use it's really, really good.
And we also are using Claude 4 in an experimental edition of our Skip AI agent, which is our data analyst and report-writing AI agent, and the output we're getting is quite intriguing as well.

(49:54):
So I don't know that this is necessarily the moment in time where people who haven't been using AI all flock to it because it's so much better, but to those of us that are deep in the game, it's a very natural next step in capability.
But maybe it does take it far enough along where you ask: how much better is Claude 4 than the average

(50:16):
human.
I mean, I would argue it's better than I am at almost everything, and I think I'm pretty good at a lot of things.
I'm really terrible at a lot of things too, but Claude 4, even Sonnet, I don't even need Opus, is better than me at coding and better than me at a lot of things.
Right, certainly research.
I have no tolerance or patience for, like, reading tons of articles.
You know, there's so much more you can do with it.
I still think I do certain things quite well that the

(50:37):
models may not, in terms of thinking broadly across a wide variety of topics and distilling it and coming up with new ideas and all that.
But my point in saying all that is, you know, you really need to look at it from the viewpoint of going beyond the trivialities of your experimentation.
If you're one of these folks that's kind of dabbled with AI, saying, yeah, I kind of went into ChatGPT and asked it to make a

(50:59):
cocktail recipe for a party I was having, and, you know, it's 2022, 2023 experimentation, and that's cool.
It's better than nothing, for sure.

Speaker 2 (51:15):
But, like, you've got to get into this stuff and, like, try doing your actual work with it.
If you've been with us from the beginning of the Sidecar Sync podcast, we were in the trenches of generative AI.
I feel like we were impressed by the little things, and it was impressive, don't get me wrong.
But looking back at where we were versus where we are now: if you're just getting started, enjoy, but also note this is the worst AI you'll see.
So I feel like in the past few years, Amith, I am shocked with

(51:38):
how good things have gotten.
And then I'll adjust to Claude 4, and then I'll say, oh, I wish Claude 4 would be better.
And then we'll have whatever, Claude 4.2 Mini Pro, and it'll be better.

Speaker 1 (51:49):
I will say that the folks at Anthropic have been a little bit simpler and clearer with their versioning and so forth.
But, yeah, Claude 4 Opus and Sonnet are pretty straightforward names.
We'll see what happens with the folks over at OpenAI with the various models they're releasing, because they do have someone focused on the consumer side of the house now who I think brings

(52:11):
a much more consumer-centric mindset.
So hopefully we'll have some cleaner model names from them.
But it's a race.
It's a race in many different dimensions.
One way to think about it is: the prize is so incredibly enormous, potentially the largest economic opportunity in the history of our species, the history of the world.
I don't think that it's, you know, too much to say that at

(52:33):
this point, because we're scaling intelligence, and intelligence up until now has required scaling, you know, the number of people on Earth, and that takes a long time and is very hard to do, whereas we have, essentially, abundance coming through the form of AI now, and that, of course, is an incredibly large opportunity.
So there's a lot of dollars and a lot of smart people chasing

(52:53):
it, and I think that's the reason we're seeing this compounding.
But ultimately it's because of the exponential nature of this, where the AI is very, very close to improving itself.
I mean, the last thing I'll leave you with on that thought is Anthropic themselves, the maker of Claude: I think they were the ones who went on the record saying about 85% of the code they write, including on the model itself,

(53:13):
is written by Claude, and so that is still human-in-the-loop in terms of the self-improvement.
But effectively, if you zoom out and say, is AI improving itself?
Of course it is, and it has been, probably, for a few years now, and in several ways, right?
It's by enabling us to do a better job.
It's improving itself.
The idea of recursive self-improvement in the academic

(53:34):
sense means the model is continually improving itself, like within the model, and that's not happening yet, but effectively we're starting to see what that means.
So it's a pretty exciting time, and I think the comments you made on safety and alignment, it's a conversation we need to keep having and thinking about, because, ultimately, what we have here is the most powerful tool we've ever held in our

(53:56):
hands, and the excitement needs to come with awareness of what this potentially could mean.
The downside risks are real, and nobody really knows what they are, and nobody knows how to contain them.
So we have to address that by continuing the dialogue, and I know of no other way to advance safety than to invest time and energy in it, and I'm thankful that the folks at Anthropic are

(54:18):
one of the leaders in this space, really deeply focused on that.

Speaker 2 (54:22):
I mean, I guess you do have some concern around the safety, but is it the kind of concern that, I don't want to say there's nothing we can do about it, but besides having the conversation, it's almost like this is just a given: if the AI advances further, we're going to see the dark side of that as well?

Speaker 1 (54:39):
I still feel the same way I did, you know, even going back 10-plus years: AI is going to be used for both good and bad, and the only way for us to counter the bad AI use cases is with more good AI and better good AI.
So that's a hyper-simplistic way of thinking about it, good and bad.
You know, there is no such thing really in that kind of

(55:02):
simplistic terminology.
But to the extent that you consider certain use cases really bad and certain use cases good, you have to focus on having a lot of good AI to protect you from the bad use cases.
So, for example, we say, hey, we can create hyper-realistic video and audio and 3D experiences, and soon maybe holograms and all this other stuff.

(55:24):
What's going to stop people from carrying out the most grandiose scale of fraud that's ever been imagined, which will be completely undetectable even by the most intelligent, most aware people? You'll be fooled by it. So how do you detect that? Well, you alone have no chance. You, along with really powerful good AI? At least you have a fighting chance, right?

(55:44):
So my point of view is, yes, the dialogue needs to continue, but we have to keep driving forward with the development of AI with the intent of using it for these good, positive, societally beneficial use cases, because no matter what we do, no matter what regulation and law enforcement attempt to, you know, tamp it down, there will be lots of people pursuing bad use cases for AI.

(56:06):
And so that's my point of view. I don't know if that's right or wrong, but I think it's more true now than it was when I started thinking and saying that. But I don't know of any other framework that can protect us from the downside of AI, other than lots of good AI.

Speaker 2 (56:23):
Yep, nope, I think nothing has changed with that.
I think it's in fact, like yousaid, more relevant now than it
ever has been and probably willcontinue to be in the future.
I would say we at the SidecarSync podcast, we're team good AI
.
Hopefully all of you are aswell.
Thank you for tuning in totoday's episode and we will see
you all next week.

Speaker 1 (56:43):
Thanks for tuning in to Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the

(57:05):
association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.