
May 1, 2025 53 mins


In this episode of Sidecar Sync, hosts Mallory Mejias and Amith Nagarajan take a deep dive into Model Context Protocol (MCP), a revolutionary standard making it dramatically easier for AI to interact with data, tools, and systems—no code required. Mallory shares her personal setup experience with Claude for desktop, walking through a live example of using MCP to automate file organization. Amith expands on why MCP is a game-changer for association leaders, how it enables business users to go from idea to execution quickly, and why data liberation is the missing piece for many organizations. The duo also breaks down new insights from Anthropic’s Economic Index, which reveals that AI is currently used far more for augmenting human effort than full automation. It’s an episode rich with practical insights, technical clarity, and a peek into the fast-arriving future of AI in the workplace.

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
https://learn.sidecar.ai

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecar.ai/ai

📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/ 

🎉 More from Today’s Sponsors:
CDS Global https://www.cds-global.com/
VideoRequest https://videorequest.io/

🛠 AI Tools and Resources Mentioned in This Episode:
MCP Quickstart Guide ➡ https://modelcontextprotocol.io/quickstart/user
Claude for Desktop ➡ https://www.anthropic.com
Zapier ➡ https://zapier.com
Cursor ➡ https://www.cursor.so

Chapters:
00:00 - Welcome to Sidecar Sync and What’s Ahead
01:02 - Catching Up: Spring Break and Autonomous Driving
05:02 - What Is Model Context Protocol (MCP)?
07:06 - How MCP Works: Mallory’s Simple Explanation
08:11 - Step-by-Step: Setting Up Claude Desktop With MCP
12:42 - Live Demo: Moving Files Using Claude and MCP
18:01 - Why Associations Should Care About MCP
26:44 - MCP and the Role of AI Data Platforms
33:07 - What Is the Anthropic Economic Index?
35:59 - Key Findings: Augmentation Over Automation
39:34 - Amith on Real-World Automation Examples
44:46 - Vibe Coding: Cool Concept or Caution Zone?

🚀 Sidecar on LinkedIn
https://www.linkedin.com/company/sidecar-global/

👍 Like & Subscribe!
https://x.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecar.ai/

Amith Nagarajan is the Chairman of Blue Cypress https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith:
https://linkedin.com/in/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory:
https://linkedin.com/in/mallorymejias


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
At the moment, as good as these tools are at building working software, they can absolutely build software that creates problems for you that you don't realize, right? So you have to be very thoughtful.
Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven

(00:23):
by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host.
Greetings everybody and welcome to the Sidecar Sync, your home

(00:45):
for content at the intersection of artificial intelligence and associations. My name is Amith Nagarajan.

Speaker 2 (00:53):
And my name is Mallory Mejias.

Speaker 1 (00:55):
And we are your hosts, and once again, we're here to deliver the good news of AI in the world of associations, and some exciting topics here today that we're really, really excited to jump into momentarily. Before we do, let's just take a quick pause to hear from our sponsor.

Speaker 3 (01:13):
VideoRequest. Easily collect, edit and share videos. Collect videos from your customers, members or volunteers by simply sending a link. Contacts can enter their information and record a short video. Once your video is complete, our platform makes it simple to share your video on social media, on your website or in marketing campaigns.

(01:36):
The possibilities are endless. Get started for free today.

Speaker 4 (01:43):
What if you could reduce churn, lower costs and forecast membership trends all from one platform? CDS Global brings predictive AI to the business of membership. We help executive teams simplify operations, protect recurring revenue and uncover the secrets to success buried in your data. Every missed insight costs you members.

(02:05):
Uncover what your data already knows at cds-global.com/AI.

Speaker 2 (02:13):
Amith, we've been out of the podcast game for a minute there. How are you?

Speaker 1 (02:17):
I'm doing great, and I think between your vacation and my vacation, we've been out of pocket for a couple of weeks, but fortunately we had some great content recorded that we dropped over the last week, week and a half. But I'm doing great. I had an awesome family spring break trip out to Hawaii, and you know you can't complain about going to Hawaii, not that I would find a reason to. It was just absolutely stunningly beautiful and had

(02:37):
great family time. My teenagers were, you know, happy to be there, which was good, and my wife and I had a great time too, so it was awesome. How about you? How are you doing?

Speaker 2 (02:46):
I'm doing well. Like you said, I also had some travel lined up. I got to go to Yosemite for the first time, which was beautiful, did some incredible hikes over there, a little bit more challenging than I thought when I was going into it, but it was really fun. And then I spent a few days in the Bay Area and then in San Diego too for a wedding, so it was a pretty crazy week. I was just telling you, Amith, I also, I forgot to mention, have

(03:08):
you ever been in a Waymo when you were in San Francisco?

Speaker 1 (03:12):
No, it's been on my list of things to do.
You tried one.

Speaker 2 (03:15):
I did.
Actually, once we got in, we realized we loved it (this is my husband and I), and so we took three in one day. But we did play a funny joke on my mom, who is quite a worrier and does not know about Waymo, where we recorded a video of us and said, oh, we're in the Uber, you know, check this out, check the scenery out, oh, where's the driver? And then kind of had a video of the wheel just moving.

(03:37):
She, of course, panicked and texted me, Mallory, get out of the car. But it was really safe, and I actually will say I might have preferred it to a human driver, because I felt actually more safe in that car than I would with a human driver, I think.

Speaker 1 (03:52):
You know, I think that most of us, even those of us that consider ourselves very, you know, cautious, diligent, defensive drivers, we're humans and we get tired and we get distracted and we have things in our minds, you know, whether it's professional or personal or family or whatever. So AI doesn't get distracted. AI is always looking in all directions and the algorithms aren't yet perfect, but, you know, statistically these cars

(04:14):
are actually already much safer than the average driver, and soon they'll be, you know, safer than all drivers combined. So I think that is definitely the wave of the future. My youngest is about to get her driver's license. I think that's a great rite of passage, but, you know, I think, for those that have a little bit younger kids, you know, maybe your kids will never drive, or never need to drive, and

(04:35):
maybe it'll just become a sport and a hobby. I would say that I am a giant fan of classic cars and have a couple myself, and I think it's one of those things where, you know, it's a different reason for having them. It's not about transportation, it's a hobby, it's a passion. But I would love to see people continue to drive in safe and responsible ways, obviously, but I do think that transportation,

(04:57):
as you know, a way of moving people and things around, is going to get dramatically safer and also more efficient. Because, you know, we human drivers are not particularly great at conserving energy. We tend to accelerate and brake in ways that are unpredictable, and AI is better at that too. So I think that's really exciting. I'm looking forward to trying Waymo. I'm actually going to be out West in a couple months, I think,

(05:18):
so I'm going to try to find an opportunity to do that. But, even more importantly than AI, tell me what your favorite hike in Yosemite was.

Speaker 2 (05:25):
So we did.
I would say my favorite one was Nevada Falls. Have you done that one, Amith?

Speaker 1 (05:30):
Yes, I grew up in California, and so I've been to
Yosemite bunches of times.
This is one of my favorite places in the world.

Speaker 2 (05:35):
So we stayed in Curry Village in these little tent yurts, and then we did Nevada Falls. But the direct line to Nevada Falls was closed, and so we kind of had to take this indirect route where we went back down and back up, and I think that's what really got me by the end. I was like, I'm so ready to be there. But once you got there, the views, the waterfalls were insane. I think the time of year that we went was great for those, and

(06:01):
so we had a blast. We also saw the lower Yosemite Falls. We did a little bit of biking around the valley. It was a great time.

Speaker 1 (06:05):
Wonderful, that's great.
I love it when people get out to Yosemite. I haven't personally been in quite a number of years, but it's one of my favorite places to go.

Speaker 2 (06:13):
It was incredible, but now it's time to talk about some exciting AI topics. We'll cover Model Context Protocol, or MCP, which we've actually covered on the pod before, but we'll go more in depth today. And then we're also going to chat about Anthropic's Economic Index and how Claude 3.7 is being used.

(06:34):
So, first and foremost, Model Context Protocol. Don't be scared, it sounds a bit technical, but it's not too difficult. MCP is an open, universal standard designed to connect AI models, especially large language models or LLMs, and AI-powered applications, to a wide variety of external data sources, tools and services in a secure, scalable and standardized way.

(06:58):
So, traditionally, integrating AI models with external systems required custom code for each data source or tool, leading to fragmented, hard-to-maintain solutions. MCP addresses this by acting as a, quote unquote, USB-C port for AI, providing a single, standardized method for connecting models to the data and tools they need, regardless

(07:20):
of the underlying provider or format. So think of it this way: with MCP, you could ask Anthropic's Claude, for example, to look up information in your CRM, send a message on Slack or access files on your computer, all without writing a single line of code. What makes this particularly exciting is how it massively expands what AI can do. Any MCP client can potentially connect to any MCP server,

(07:45):
creating an explosion of possibilities.
As an example, HeyGen recently released an MCP server that lets you tell Claude, "generate a HeyGen video for every name in the attached CSV," and it works. To give you a little bit of the technical side, just so we can have a foundation for discussion, it's built on a client-server architecture. So you've got the host, which is the AI-powered application,

(08:08):
like the chatbot or desktop assistant, that needs access to external data. You've got the client, which manages a dedicated connection to an MCP server, handling communication and capability negotiation. And then you've got the server, of course, which exposes specific capabilities, such as functions, data or prompts, over the MCP protocol, connecting to local or remote data sources.
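To make that server piece a bit more concrete, here is a minimal sketch of what an MCP server can look like using the MCP Python SDK's FastMCP helper. Treat it as illustrative only: the SDK's exact module layout may differ by version, and the tool shown is a made-up example, but it captures the idea of a server exposing a capability that a host application, such as Claude for Desktop acting through its MCP client, can discover and call.

```python
# Illustrative sketch of an MCP server (assumes `pip install mcp`; the exact
# FastMCP API may differ by SDK version). The server exposes one made-up tool
# that an MCP host, such as Claude for Desktop, could discover and call.
import os
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def count_files(folder: str) -> int:
    """Count the files in a folder, as a toy example of a server capability."""
    return sum(
        1 for name in os.listdir(folder)
        if os.path.isfile(os.path.join(folder, name))
    )

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport, which desktop hosts launch locally
```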

(08:32):
So I set out to test this for myself with some encouragement from Amith, and I'm glad I had that encouragement, because once I glanced at this quick start guide I thought, huh, maybe this is something I'll do later, but it's actually not that hard. So I want to share my screen and show you the process quickly for setting up MCP on your own device.

(08:54):
If you're interested, I encourage you all to check it out. It might seem a bit involved in the process part, but it's fairly simple, and watching what you can do in the end is pretty fun. So if you drop "Claude model context protocol" into any search engine, it will pull up this quick start guide. I'm going to walk you through this piece by piece.

(09:15):
First and foremost, you have to download Claude for desktop. That's how you can get the MCP set up to work for you. So that's the first step. I downloaded Claude for desktop, which I have here on the left side of the screen, and then what you want to do from there is, inside Claude for Desktop, go to settings. So I'm on a PC right now, so I'm just going to File and

(09:39):
Settings, and you will see it pulls up this screen. I'm going to navigate to the Developer tab on the left-hand side. Mine looks a bit different because I've already configured this, but it will look something like what I'm showing you on the right-hand side of my screen, and you will just click a button that says Edit Configuration. From there,
this is going to create a file on your computer.

(10:01):
It should automatically pull up that file. What you're going to do is open the file, delete any of the text that's currently in there, and you're going to copy and paste in this code that you directly pull from the quick start guide. So I have Windows, so I copied and pasted the Windows code and I dropped it into that same file that was created on my computer.

(10:22):
Something important to note here is that you have to replace "username" with the actual username of your device, just as a note. I also want to point out that, if you see this part, I'm giving it access with the standard code provided from Anthropic. I'm giving it access to Desktop and Downloads.

(10:44):
I decided I didn't really want to give Claude for desktop access to my Desktop folder, so I just deleted this line in my version and I only left the Downloads folder. I don't particularly have anything sensitive in my Downloads folder, so that is the sandbox that I decided to use. From there, you also have to make sure that you have Node on your computer.

(11:08):
It'll give you some instructions. You just go to this website link and you download this to your computer as well. And then finally, here at the end, you just need to close out Claude and restart it, and then you should have access to the Model Context Protocol.
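For reference, the configuration file from that quick start guide ends up looking roughly like the JSON below, pointing Claude for Desktop at the filesystem MCP server and listing only the folders it may touch (here, just Downloads, mirroring the sandbox described above). The exact server package name and path come from the quickstart guide and your own machine, so take this as a sketch rather than something to copy verbatim.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "C:\\Users\\username\\Downloads"
      ]
    }
  }
}
```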
As a quick note, on my computer it was a little bit glitchy, I

(11:32):
don't know if this was just a my-device thing, so I actually had to restart my computer. And the way that you know this is working is when you see this little hammer here and it says 11 MCP tools available. That's how I knew it worked. These are just some of the things that you can do with it. So essentially, I'm giving Claude for desktop access to the files on my computer, only in my Downloads

(11:54):
folder. So keep that in mind. I can edit files, I can get file info, I can create directories, I can move files, which is the example I want to show you, read files, read multiple files, search files, write files. So lots of options there. And now I want to show you how I decided to use it.

(12:16):
So, I will admit, my Downloads folder is not the most organized all the time. I'm mostly downloading lots of images, images for blogs, images for cover art for the Sidecar Sync podcast, so lots of images in there, and I decided to run a really quick experiment here. I said, basically, with no further prompting, as you can

(12:38):
see on my screen, can you take all the images in Downloads and move them to a folder called Images? I feel like that's a pretty straightforward prompt. I didn't provide a ton of detail, and then it started this lengthy process, which I've already run, because it took quite a bit, because, as I admitted, I have a lot of images in my Downloads folder. It says, I'll help you move all the images from the Downloads folder

(13:00):
to a new Images folder. Let me first check if both folders exist and create the Images folder if needed. That Images folder did not exist, so it created it, and, scrolling down my screen here, you can see it moving all the image files from my Downloads folder, which is quite a few, into a new folder called Images, and I confirmed this.

(13:24):
When the whole process was run, I went to my Downloads folder and I saw a brand new Images folder there that I had not created, and all the images were moved in there. So it did take a few minutes to set up and it took a few minutes to run, but I think this is a really neat use case to showcase. Also, I did, as a note, ask it to read a PowerPoint deck

(13:46):
in my Downloads and it did say the file size was too big. So that's on my end. It can read PDFs. I also did some experimentation with Word documents and it seemed to struggle to read Word docs, but when they were in PDF format it could actually read what the file said and then provide a summary to me. So that is just a quick example for how you can test out Model Context

(14:10):
Protocol on your own.
So, Amith, why do you think our listeners should care about MCP?

Speaker 1 (14:18):
I want to go back to something you mentioned in your description that I think Anthropic popularized when they announced MCP, which is this idea of the USB port for AI. So what is the USB port? It allows you to connect different kinds of electronics together in a standardized way. For those of us that have been around a little while, connecting peripherals to computers used to be quite

(14:39):
challenging. You'd have all sorts of different types of connectors, ways of connecting printers and external hard drives and different kinds of peripherals to your PC or your Mac, and then USB, the universal serial bus, came out maybe 20 years ago, and various versions of that have evolved over time. And as USB has evolved, it's gotten faster and more

(15:01):
standardized, and most devices now work off of USB-C. In fact, even Apple's latest iPhones, begrudgingly on their end, are USB-C compatible, which makes them easier to charge and connect with a variety of different things. So why is MCP the equivalent of a USB standard for AI?
Well, it's a standard way of connecting tools and models and

(15:23):
applications together in a really simple way. So it's a very, very lightweight and simple protocol. You described it really well in terms of what the client and what the server does. From a business user's perspective, what it means is that AI can take action. So for a long time on this pod we have separated out the idea of AI models from AI agents by saying models do thinking, they

(15:47):
sometimes now are starting to do reasoning, but they can't take action on your behalf. And AI agents, on the other hand, combine models with behaviors or with actions. Well, in a way, MCP essentially agent-enables a desktop app like Claude to take action for you. So now the model essentially is able to gain context of its

(16:11):
environment, understand data, understand documents, understand you more and more and more from all these other tools, and can talk to these other tools to say, hey, I want to create a record in the CRM, I want to renew this member in my AMS, I want to save a file to the file system. Prior to MCP, you would have to manually do those things. Claude or ChatGPT or Gemini could give you advice.

(16:34):
It could say, hey, Mallory, here's a really great email that you could send to this member because they complained about something. In comparison, with MCP, you could say, hey, Claude, take a look at my customer service inbox and read, using Zapier or using whatever, read my most recent five complaints. Tell me the best way to respond to them.

(16:55):
Okay, go ahead and respond to them with those responses. And if you enable MCP servers to retrieve messages and send messages, which obviously has its risks too, you don't want to necessarily just let the AI run wild with it, right? We can talk more about that, but the idea is that the AI can actually take action on your behalf. Now, this is not a new capability in the general sense

(17:15):
of it. Developers could previously stitch together these types of capabilities, building what we've been calling AI agents or AI systems, but now you, as a business user without any coding skill, can do this.
Claude is Anthropic's AI product, and Anthropic is the company that proposed the MCP standard as a protocol.

(17:38):
And the good news is, every other major lab is behind MCP, including OpenAI. OpenAI didn't jump behind it right away. Google got behind it, a number of other labs got behind it, a lot of people started building for MCP, and then recently, OpenAI announced that they would fully support MCP. So you will be able to do what Mallory just demonstrated in ChatGPT as well.

(17:59):
At the moment, Claude Desktop is the tool that we use for this type of functionality, because it's the best user interface for interacting with MCP. Also, our friends over at Groq, Groq with a Q, just released the Groq Desktop, which is primarily a little bit developer-centric. It's a brand new product, but that allows you to do MCP stuff

(18:20):
as well. So there's multiple different options. If you are a software developer, you can utilize MCP servers from Visual Studio Code, from Cursor, from Windsurf, from other environments. So MCP is a standard already that's being adopted by a lot of different apps. To me, though, just zooming out to your question of why it matters for, let's say, you're the CEO of an association and

(18:42):
you're decidedly non-technical, you want to leverage AI, but you're not a programmer. You don't ever want to be a programmer. How do you use this stuff? Well, just follow the instructions that Mallory showed you. It might be a little bit involved looking, but just get over the hump and go do it, because if you try it out you're going to be blown away by the capabilities, and then you personally know what is possible with this MCP protocol.

(19:05):
It's really, really powerful, so I'm excited about it. I think it's going to open up a whole category of use cases for business users. You know, as much as I'm a programming geek and love building systems and all this stuff, I get more excited about empowering just the typical user, right? How do we make the typical user more productive, where their creativity, their curiosity can directly lead to better results?

(19:27):
And MCP is a major bridge for that. So I'm pumped about that.

Speaker 2 (19:33):
So you mentioned all the major labs, or many of the
major labs, are MCP compliant.
Is that the right term?

Speaker 1 (19:40):
Yeah, I don't know if compliance would imply, like, some kind of standardized testing and people verifying, so I don't know that I'd go so far as to say that. I think they're just saying that they're expressing their support for MCP as a protocol, and so therefore it's, you know, do you want to, like, build on top of Betamax, right? Or do you want to build on top of, like, HD DVD? These are old standards, for those of you that aren't

(20:01):
familiar, that died, you know, over time. What you want to do is try to build on top of a standard that's going to be around for a while, and so, with all these labs behind MCP, it's quite likely that MCP will see several years of flourishing activity around it.

Speaker 2 (20:17):
So and you mentioned I can't do the same with OpenAI
right now, but that I could Iwould have to download another
server from OpenAI to mycomputer.
Or can you kind of explain howthat would work?
Or I already have it?

Speaker 1 (20:29):
So ChatGPT at the moment their desktop client as
of the moment we're recordingthis in kind of late April of
2025, chatgpt desktop does notsupport is not what they would
call an MCP client.
It cannot talk to MCP servers,but OpenAI has expressed that
they plan to add it, so by thetime you listen to this, it may
already be in there, so you needto download.

(20:50):
This is not something you cando through the web browser.
You have to download thedesktop app for your Windows or
Mac machine and Claude supportsit already.
I suspect imminently thatOpenAI is going to introduce
this because there's so muchbuzz around it and, honestly,
it's not that hard to support.
It's a fairly easy thing to doto support MCP from a software
developer perspective, so Iexpect OpenAI will have it very,

(21:12):
very soon.
I'd be shocked if Google didn'tintroduce a Gemini desktop
product.
That has this imminently aswell.
So you'll see it as a standardthing.
Also, Claude, one of the things I love about Claude is their user experience is awesome.
The way they do artifacts isgreat.
Their experience aroundconfiguring MCP leaves a little
bit to be desired.
It's a little bit complex.

(21:33):
That will change very rapidly.
Again, I would urge all of youthat are listening or watching
the Sidecar Sync.
You are on the forefront.
You are doing your best to lookahead and to be part of this
wave.
Don't let that little bit offriction stop.
You Go try it out now.
But I would imagine in the nextfew months these tools will all
have way easier user interfacesfor setting up MCP servers,

(21:54):
much more point and click kindof stuff.
But right now it's a little bitinvolved to get it set up.
But that's a one-time thing.
Once you set it up then it'sback to chat and you can talk to
Claude or whatever the tool isabout what you want.
It can start doing amazingstuff for you.

Speaker 2 (22:09):
And to remind you, to break your brain a little bit,
I did have some error logs whenI tried to do this the first
time.
Looking back, it really is asimple process, but I had just
not done anything like thatbefore and I took the error logs
, pasted those into ChatGPT because I was over my limit for Claude until 6 pm, and then had it identify what was going
on and then it provided meadvice of what I could do to fix

(22:31):
it.
So, reminder, you can use AI ontop of AI on top of AI to help
you with AI.

Speaker 1 (22:37):
Yes, and once you install the file system MCP server that Mallory demonstrated, now Claude has access to Claude's configuration file. So Claude can actually configure his own MCP configuration file for additional MCP servers.
So if you, for example, wantedto configure Zapier as an MCP
server, there's a portion thatyou do on Zapier's website and

(22:59):
then you can say hey, claude,here's the documentation from
Zapier on their MCP capabilities.
Here's a URL that I got fromthem with my MCP server
endpoints.
Now set up your own MCP file to do that, and Claude Desktop will be able to edit, quote unquote, his own configuration file. So once you give Claude access to a portion of your file system, you are in good shape to be able to really simplify this.

(23:22):
And by the way, when Iemphasize the word portion of
your file system, as Mallorydemonstrated, it is a good idea
to limit the areas of yourcomputer that you give any tool
access to.
There's really no reason to give Claude or any other tool
complete access to your wholesystem.
It's best to give access justto limited portions of your
system.
This is also a good opportunityto just emphasize that we live

(23:46):
in the wild west of AI.
These are early days.
You are going to see all sortsof tools come out saying, hey,
I've got this great chat desktopapp that supports MCP, install
me, I'm free and maybe it isreally cool.
But be thoughtful about whereyou download software from.
Get software from trustedproviders, so you know.
The major labs are a good placeto start.

(24:07):
I'm not saying third party appsare bad.
I'm just saying that you shouldprobably be thoughtful about
software you install on yourcomputer, particularly if you're
going to enable access to usetools to access your file system
, to access websites that holdimportant data, like your CRM,
for example.
This is all very, very powerful, but be thoughtful about what

(24:27):
you do.
Set up experiments, sandboxthem in thoughtful ways,
encourage your team to play withthem, but also teach your team
about the downside risks of justinstalling whatever random
software.
So, for example, I'm a big fanof what DeepSeek is doing in
terms of research.
I love the fact that they'reopen, sourcing all their models.

(24:48):
I think by the time you listento this, we'll probably have a
DeepSeek R2, like next-genreasoning model out there, and
that's awesome.
Now, at the same time, I wouldnot use DeepSeek's website ever,
because it's inferencing.
It's happening in a countrywhere we don't know what's going
to happen to our data, and thisis not anything anti-China or

(25:09):
pro-China, it's just like I justwant my data to be here in the
United States.
That's personally I feel morecomfortable with it, and that
would be true of really anywhereelse in the world.
And so, at the same time, you know, I would not want to download a DeepSeek desktop app because I don't know where it's doing its inference.
It's probably doing itsinference in China If I'm going
to give it access to my filesystem.
What's happening right?

(25:30):
So I think you have to bethoughtful.
I'm not saying to be paranoid,but I'm saying be thoughtful
about apps you download and howyou give them access to
increasingly powerful tools inyour environment.
Using a web browser likechatgptcom, there's a limit to
what it can access, but adesktop app can access a whole
lot more.

Speaker 2 (25:50):
That's an excellent point, and it brings me to the question, Amith: if we have an association leader listening to this podcast, should they be incorporating MCP protocol into
AI guidelines or usage and then,on the flip side, should
association staffers be askingfor permission to do this?
I mean, what's your take onthat?

Speaker 1 (26:11):
Yes, I think so.
First of all, if you're astaffer and you don't have
access to things that are inyour association's tech stack,
like your AMS probably doesn't have an MCP server and may not for a while, if ever, and some of these tools are a little bit long in the tooth, and so they might not have support for these kinds of things. That's okay.
You can still experiment withother things, like the file
system, but you know, do it in.

(26:32):
When I say sandboxing, I meanset it up in an environment
where you're not sharing supersensitive data.
Start off with like little testcases, like the downloads
example of images and stuff likethat.
Those are benign.
You know, even if those filesdid get uploaded to a server
that you don't want themuploaded to, it's not a big deal
.
So be thoughtful.
Test things out as anassociation.

(26:53):
Yes, you should absolutelyupdate your AI policy to
accurately depict how you wantMCP to be handled.
It's a really important thingto educate every employee on.
With the Sidecar Learning Hub,we are working on some
additional lessons right now toadd to the Learning Hub's
content exactly on this topic tohelp provide guidance to all of
our learners, and I think youknow the thousands of folks that

(27:15):
are going through that contentare going to be able to, you
know, quickly take advantage ofthat, which is great.
We will also publish a lot ofpublicly available content on
the Sidecar blog to help driveguidance on.
You know, ai policy and MCPgive you some templates, so it's
really really important to bethoughtful about this, because
you will have users in yourorganization who want to
experiment with this.
Don't stomp on them, don'tprevent them from doing it.

(27:38):
Just give them some thoughtfulguidance on what they should be
doing and what you'd like themto not do, just like with AI
tools in general.
It raises the stakes in termsof security, but it also raises
the possibility so much higher.
You should be excited, really,really excited, but you should
also be very thoughtful about it.
Now I do want to say a couple ofthings about the world of

(28:00):
associations and kind of theunique data architectures that
many associations have, whichoftentimes consist of.
They might have mainstreamsystems like Microsoft Office
365, google Slack you mentionedearlier things like that Great
and those tools are mainstream.
They're big companies.
If they don't already supportMCP, they will very, very soon.

(28:23):
Right?
That's an awesome piece of news.
But you probably have somesoftware in your stack like an
AMS, an LMS, maybe anassociation or
nonprofit-specific financialmanagement system.
Maybe you have a contentmanagement system that's been
tweaked, maybe you have somecustom applications that have
been written, and these productslikely will never support MCP,
or not anytime soon.

(28:45):
Even if you're on a verycontemporary AMS, for example,
the vendor behind it probably isa much smaller company than one
of the labs we're talking about, right, and so they're going to
take some time to develop thiscapability.
So what do you do?
How do you get your world ofdata unified into the world of
AI?
From an MCP perspective, what Iwanted to mention there is this

(29:06):
is the same challenge you havewith unifying your data or
accessing your data for anyother kind of AI application.
Mcp is just going to put morepressure on you to figure this
out, because your users aregoing to say, hey, if I have MCP
access to my AMS data, oh, mygosh, mallory, what I could do
is automate this and this andthis and this, and it would be
amazing.

(29:26):
Can I please have access to itand Mallory, who's the CEO of
the association, says hey, I'mreally sorry, my AMS doesn't
support it, there's nothing Ican do.
Or is there?
And what Mallory can do is shecan implement an AI data
platform, which is really justanother fancy term in the world
of technology for anotherdatabase, but a database you
control.
It can be literally anythingyou want it to be.

(29:52):
It doesn't have to be MemberJunction, which is our free,
open source AI data platformthat everyone in the world is
able to download, install andrun on their own for no cost.
That's why we built it, but youcan use anything for this.
So you have an AI data platformthat's continually ingesting
data from your legacy systems,your AMS, your LMS, your FMS and
legacy might be actually aharsh word, it sounds negative,
but it just means a system thatdoesn't necessarily support all

(30:12):
of the latest standards, right?
So you have all the dataflowing into this unified
database and then, if thatsystem supports MCP, now you
have, at a minimum read accessto all of your data across your
whole enterprise.
So what if your AMS and CRM andLMS data was all in one unified
database and now you stand upMCP support on that database,

(30:33):
which, by the way, the latestversion of Member Junction gives
you again for free.
It gets every single piece ofdata.
In a Member Junction AI dataplatform Instance, you have an
MCP server just built in rightout of the box.
So Claude knows all about MCPserver protocol and knows that
member junction in yourenvironment supports MCP for all

(30:55):
of the data types in your AMS.
Claude can now access whateveryou enable accessing members,
figuring out who's renewed, whohasn't, getting a list of people
that are about to renew,pulling that list down,
generating personalized renewalletters for each of them and
then connecting to Zapier toconnect to your HubSpot to send
individualized messages to everyone of those people.
I mean, these are the kinds ofthings you as a business user

(31:16):
can do if, and only if, youliberate your data from the
confines of a proprietary prisonof data, which is what a lot of
these legacy systems are, andyou can do that.
You have the power to get yourdata out and unify it in a data
platform.
We've been talking about thisfor years now, and we've been
talking about it in the contextof building agentic AI
applications, doingpersonalization at scale.

(31:39):
What I love about MCP is itopens up the door for business
users to do what their mindcomes up with, but you have to
have your data available inorder to make that possible.
So having your data wire intoan AI data platform once again,
this is probably the biggest usecase of why AI data platforms
can be such a massive immediateROI.
You get your data in there andthen you can enable your users

(32:01):
to have all sorts of amazing usecases like the one I just
described.
Just as one fairly smallexample, actually, yeah.
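To illustrate the kind of read access Amith describes, here is a hypothetical sketch, not Member Junction's actual implementation, of how an AI data platform could expose a simple member query over MCP using the Python SDK and a local SQLite database; the database file, table and field names are invented for illustration.

```python
# Hypothetical sketch: exposing read-only access to a unified association
# database over MCP. This is NOT Member Junction's actual API; the database,
# table, and tool names are invented for illustration.
import sqlite3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("association-data")

@mcp.tool()
def members_up_for_renewal(days: int = 30) -> list[dict]:
    """List members whose renewal date falls within the next `days` days."""
    conn = sqlite3.connect("association.db")  # the unified AI data platform store
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT name, email, renewal_date FROM members "
        "WHERE renewal_date <= date('now', ?)",
        (f"+{days} days",),
    ).fetchall()
    conn.close()
    return [dict(r) for r in rows]

if __name__ == "__main__":
    mcp.run()  # serve over stdio so a desktop MCP client can launch it locally
```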

Speaker 2 (32:09):
Wow.
So basically, we have agentic applications at our fingertips, but it's all reliant on the data.

Speaker 1 (32:48):
But if you then combine that with having all of
your data, I mean the worldopens up.
Train ChatGPT to say, hey, thisprocess I just did where I
grabbed the list of about torenew members, I pulled them in,
I generated a custom renewalreminder, I sent it out through
HubSpot.
I want you to call that myrenewal process and I want you
to remember it and then in thefuture I can say, hey, by the
way, I just want you to run thisfor me once a month.

(33:09):
That does not exist yet.
Just to be clear, I'm makingthis up as I describe it, but it
is so easy to see that futureunfold where these
consumer-grade AIs willabsolutely allow you to run
recurring tasks.
Chatgpt already does have theconcept of scheduled tasks in it
, by the way, but it's reallyweak right now.
All it does is it reminds you,saying hey, mallory, it's time

(33:29):
to run your renewal process.
But what if you could say, hey,this thing I just did here.
I want you to justautomatically do this for me
once a month and send me asummary of what happened.
That's going to be a thing Now.
Will you want to do it that way?
At the enterprise scaleProbably not, but that's a
fantastic way to prototype ideas, and then you could scale them
up on an AI data platform orhowever you want to do it.

(33:50):
So I think it just opens up thedoor for business users Again,
I keep hammering that termbusiness users to be able to
come up with new processes soquickly and with such incredible
power.
You know, I just get superexcited thinking about it.

Speaker 2 (34:06):
On that note I did immediately.
My mind went to HubSpot when Iwas doing some research on MCP
and it looks like they don't yetsupport MCP, but that they will
soon.

Speaker 1 (34:18):
So what you just described will surely be
available within the next fewmonths.
And you can actually do itright now because HubSpot has a
lot of endpoints in Zapier andZapier enabled MCP for all 8000
systems that connect to Zapier.
So but yeah, HubSpot, Salesforce, NetSuite, you know, Sage, like
all these major vendors, I willbe willing to guarantee you,
certainly by the end of the year, but probably much, much sooner
for the big ones they're goingto have MCP server support

(34:40):
directly in their software.

Speaker 2 (34:44):
Moving on to topic two for today we want to discuss
Anthropic's Economic Index, which is a data-driven
initiative launched by Anthropicto systematically track and
understand the evolving impactof AI on labor markets and the
broader economy.
The index is based onlarge-scale analysis of
anonymized interactions with Anthropic's Claude AI platform,

(35:05):
offering a unique real-timeperspective on how AI tools are
being incorporated into actualwork tasks across a wide range
of industries.
Their second Anthropic Economic Index report analyzes data from Claude 3.7 Sonnet usage.
Specifically, this reportprovides fascinating insights
into how AI is actually beingused across different industries

(35:26):
and professions.
So since launching Claude 3.7 Sonnet, Anthropic has observed
some key trends.
First, they've seen increasedusage in coding, education,
science and healthcareapplications.
This suggests AI adoption isexpanding into more specialized
and technical fields.
Second, the new extendedthinking mode, which we've

(35:48):
covered on the pod before, ispredominantly used for technical
tasks.
Computer science researchersused it most nearly 10% of their
interactions followed bysoftware developers about 8% of
their interactions and creativedigital roles like multimedia
artists, and then followed bygame designers.
This indicates that complex,deep-thinking tasks are where

(36:08):
users see the most value in AIassistance.
The third trend.
The report breaks down howpeople interact with AI across
different occupations.
They found that copywriters andeditors show the highest amount
of task iteration, where humansand AI collaborate to write and
refine content together.
Meanwhile, translators showamong the highest amounts of

(36:29):
directive behavior, where themodel completes the task with
minimal human involvement.
Amounts of directive behaviorwhere the model completes the
task with minimal humaninvolvement.
Perhaps most interesting is thebalance between augmentation
and automation across industriesTasks in community and social
services, including educationand counseling approach 75%
augmentation, while tasks incomputer and mathematical
occupations are closer to a50-50 split.

(36:50):
No field has tipped intoautomation-dominant usage just
yet.
The bigger picture finding hereis that AI is primarily being
used to augment human work about57% of usage rather than
automate it.
Even in fields where AIperforms well, humans remain in
the loop, especially forcreative and strategic tasks.
My question for you, Amith: do you find any of these insights

(37:13):
surprising?
Do these feel right in linewith what you've been feeling
anecdotally?

Speaker 1 (37:19):
I am not surprised by this.
I think that the augmentationstory is the really exciting
story to focus on.
I do think certain things will100% be automated as these
models get even slightly moreintelligent or slightly more
reliable.
But augmentation is the bigstory because it's about having
a bigger pie rather than sayingthe percentage of the pie taken

(37:39):
over by AI is increasingly large.
So it's an abundance storyrather than a story of missed
opportunity or AI taking overthe world.
So what you see here is more isgetting done.
More exciting things arehappening.
Let me give you a quick littleexample as an anecdote to
illustrate this.
So, with Claude 3.7 Sonnet, which is the model this research

(38:01):
was based on and you could saythe same thing, by the way, for
Gemini 2.5 Pro or GPT 4.1 or abunch of other really cutting
edge models, we found them to be so good, and Claude 3.7
specifically to be so good that,using a new tool, one of our
development teams is working onthis really sophisticated
learning content automationproject, and what it is
essentially is this ability tocompletely automate the

(38:24):
generation of asynchronouslearning content for our LMS.
So if you imagine, like theSidecar AI Learning Hub has
seven different courses, each ofwhich has, you know, somewhere
between five and 15 lessons.
Each lesson consists of a bunchof slides and audio and video
and demos and stuff.
How do you continually updatethis stuff and automate the
regeneration of this content sothat it doesn't require Mallory

(38:47):
or me or one of our othercolleagues at Blue Cypress?
You know literally recording itthe way we've recorded the
content historically.
So we've automated that using amixture of different tools and
recently we migrated fromSharePoint to a different file
provider, or file system provider, called Box.com for this
particular project, because itjust has far superior workflow

(39:08):
and better version control andSharePoint's fine for a lot of
things we use across theenterprise.
But we needed a much more robustfile management system for the
sidecar team in order to addressthis particular use case.
So we did not have support for Box.com in our software. In our software stack, which is based on Member Junction, we had support for local file systems, SharePoint and the

(39:31):
major cloud providers, Google, Azure, et cetera, but we didn't have Box.com built in. Literally within minutes, one of our team members was able to get in, get Claude Code to build a complete Box.com implementation
and while they were at it,they're like oh, let's just
knock out Dropbox as well.
Oh, let's do Google Drive too,and maybe a couple of others.
So there was like four or fiveother like file provider type,

(39:52):
you know, cloud storageproviders that got implemented
in member junction as juststandard drivers or providers in
that software, each of whichwould have probably taken a
developer anywhere from two tothree days to build and test,
were done literally in minutes.
And so that's still augmentation, though, because the developer
who was doing it had to reviewit and make sure it worked and
still test it and all of that.

(40:13):
But it's a force multiplier,right, because that would have
taken a team of four or fivedevelopers a month to release a
feature of that significance,and now it's just done, right.
So the speed at which you cango is really really crazy, I
think.
Going back to the economicimpact, to us what that means is
there's more value createdrather than, oh, we've reduced

(40:34):
labor requirement, right, as apercentage of the total pie.
Ai probably did 95% of thatwork, but that work wouldn't
have gotten done, or wouldn'thave gotten done certainly by
now, if it wasn't for thatcaliber of AI.
So, really, what I'm saying essentially is what the report says has been my experience

(40:55):
personally as well.

Speaker 2 (40:55):
I'm wondering if you feel there's a missed
opportunity on the automationfront.
I know you mentioned maybewe're just not quite there yet
in terms of being able toautomate all these processes,
even the example you gave withMCP on the previous topic.
But do you feel there areopportunities to automate?
But we're just kind of focusingon the augmenting piece.

Speaker 1 (41:10):
Well, I think the question is is what's the bar
that you have to pass in orderto feel comfortable with
complete autopilot for a givenprocess, right?
So this is an area where,understandably, most association
leaders have a degree ofconcern handing over the reins
to an AI to completely makedecisions on a variety of things
, whether it's customer support,outbound marketing whatever the

(41:31):
case may be or, for what Idescribed, coding, and I think
that's appropriate to have whatwe often call human in the loop
in an agentic workflow wherethere's some expert human
reviewing the work that the AIproduced.
I still think we're at thatphase.
The question is really like arewe being overly cautious?
Are we being overlyconservative?
Is the AI already better thanthe average human in a

(41:54):
particular domain, and do wehold the computer to an
unreasonably high standard interms of what level of
perfection it has to achieve toachieve quote unquote full auto
ability?
And I actually think that'sprobably true Now to some degree
.
It's probably still fair,because the downside risk of
screwing up external-facing communication is pretty high.

(42:14):
At the same time, there's waysto mitigate that right when you
can tell people hey, listen,this is an AI response.
Be transparent, say it's notalways perfect.
Please let us know if there'sanything that's off.
We have humans in the loop thatare available at all times to
help you out, but we'reoptimizing for being super
responsive at a scale we can'tdo with human-only responses,

(42:35):
right?
So I think there's ways tobalance that out.
My general view is that I thinkpeople are going to get more
and more comfortable withautomation complete automation
over a period of time.
The first use case in AI that Ireally was pushing on years and
years ago was personalization,and with our AI newsletter
product Rasa, the first thing wewere doing was actually super

(42:57):
simple.
We were simply saying hey,association folks, let the AI
choose which articles Malloryshould get in her newsletter
versus which articles Amith should get in his newsletter.
So it's the same newsletter.
The overall structure of thenewsletter, the copy, a lot of
it's the same, but justinserting different articles.
So if I have 30 or 40 articlesthat might be good to send out

(43:19):
to people for a given issue,each person gets the articles
that may be of greatest interestto them, right?
It's a simple concept.
Even that, though, people were totally freaking out about, like, eight years ago when we started doing this, and even five years ago, and some people are super uncomfortable with that today.
But most people, once theystart letting the AI pick the

(43:40):
articles on a per individualbasis, totally forget about it
after about like an issue or two, because they're like, wow, our
members love this, it's totallyperforming and I can't keep up
with this thing.
Like, the AI is at a scale Ican't keep up with and we're
about to release a newcapability with Rasa where it'll
actually write a lot of thecopy and summarize articles and
it'll also write like openingparagraphs for marketing emails

(44:00):
with the new campaigns productwe're launching.
And right now people are goingto be a little bit concerned.
They're going to say, hey, I'msending out 30,000 emails to
30,000 members.
Do I really want to have the AIwrite a call to action
paragraph that asks them to cometo our annual conference, that
takes into account their youknow, their greatest areas of
interest and all stuff?
Because I can't read 30,000CTAs and some people will say no

(44:22):
and some people will say yes.
We're ones who are saying yes,we're about to roll this out for
digital now.
So those of us, those of you20,000 plus folks that are on
our mailing list are going tosee digital now invites that are
hyper personalized for you inthe next 30 days or so, uh, and
you can tell us what you think.
But we're, you know, it's ourjob to experiment, maybe a
little bit past the line of whatsome folks are comfortable with
.
I think very soon, though, mypoint is, people will become

(44:45):
very comfortable with that andthey're just going to assume oh
yeah, of course, yeah, I canwrite that opening paragraph.
That's no big deal.
So our comfort level, or likeyour experience with Waymo,
right, like the first time youexperienced like, oh my God, I
am not sure this is going to becomfortable.
And then like, literally withinone or two rides, like, wait a
second, this thing's way morereliable than the average Uber
driver.
So I think automation is goingto happen.

(45:06):
Like full automation is goingto happen very quickly, like way
faster than we expect.
The tipping point happens andall of a sudden, there's just
there's not even like a conceptof looking back.
I think we're within the nextyear.
That's going to happen in manydomains.

Speaker 2 (45:21):
As an aside, I've heard this from the Rasa team to
your point about how eightyears ago the association space
was really scared of thisconcept, that back in the day
they wouldn't even lead withartificial intelligence.
The Rasa team didn't want toinclude AI like forefront on the
website because that was reallyscary to people, which I think
is crazy.
Right, because now it's like wehave AI, this AI powered, this

(45:42):
feature, and also on the Waymofront.
I just had a memory pop upwhere I was about to enter the
Waymo and I had this momentwhere I literally out loud said,
oh, I'm scared, and then thesepeople walking by us on the
street laughed.
We're like, oh, she's scared ofthe Waymo, you know.
But you're right, once I got inthere, once I got more
comfortable with it, I actuallypreferred it to the human driver
.

(46:02):
I want to talk about one more thing. Amith, that was a really interesting example you shared with Box.com and the learning content automation.
Would you call that and I knowwe'll probably cover this in a
full topic one day on the podwould you call that a version of
vibe coding that we've kind ofheard thrown around in recent
times?

Speaker 1 (46:21):
Yeah, you know, I'm glad you brought that up, and so
that term, I believe, wasoriginally coined by this guy,
Andrej Karpathy, who's one of the most amazing AI researchers on
the planet.
Have tons of respect for himand, I think, for people of that
level of capability, the ideaof vibe coding it's basically
this.
For those of you that haven'theard it a lot, it's this idea

(46:41):
that you have a person sittingwith a tool like a cursor or a
Visual Studio Code and justhaving a chat with the AI and
the AI is doing all the codingand you're kind of going kind of
at this speed.
That's somewhat super human interms of how it feels.
You're not really looking atthe code or doing the coding
yourself, you're just you'recoding in the sense that you're
talking to the AI and it'sbuilding a program for you.

(47:03):
And I love the concept in thesense that you're taking
advantage of AI, particularlyhelping people who aren't
necessarily professionalsoftware developers to be able
to have an unlock and be able tobuild things.
What I do not like about it isI think it kind of it makes it
seem like you can kind of handover the reins for building
software to the AI by itselfalmost, which we're not quite

(47:26):
ready for for a few reasons, oneof which is that, as cool as
this stuff is to rapidly buildsoftware, if you just kind of
hand the reins over to AI andyou have no idea how the
software was constructed, the AIis going to essentially build a
different version of it everytime you ask it for a new
program.
So you can say, hey, I wantthis particular tool to be built

(47:46):
One day.
It'll build it one way, thenext day it'll build it another
way, and, yes, you can give itsome rules and stuff.
But as a non-developer, if youdon't have anybody helping you
with this, you can sometimes getthings that make a lot of sense
and sometimes get absolutegarbage, and now it might
actually be functioning garbage.
But when I say it's garbage, itsounds like how can you have
functioning garbage?
Because the functioning garbageessentially is software that's

(48:08):
either really inefficient, isbrittle, uses techniques that
are not secure, exposes you torisks that you don't want, and
you don't necessarily have anyidea about this.
So I don't think there's anyfuture where professional
software architects anddevelopers aren't involved in
this at some level.
Maybe they're AI at some point,but the term vibe coding, I

(48:30):
think, makes it seem to people,like you know, they can just go
build whatever they want at theenterprise level, I think, for
personal tool use and for usingMCPs this.
So I think that vibe coding asthe category does fit into the
context of augmentation if it'sdone right, and in fact my

(49:01):
example earlier, you could say,is a form of vibe coding right,
where you had a professionalsoftware developer building, you
know, some significantfunctionality and cloud code.
The CLI tool did that, but itwas reviewed by a team of
experts to make sure that it wasbuilt in a certain way where it
fits into a very structuredframework.
It's secure, it's reliable,it's built in a robust manner.

(49:22):
Last thing I'll say about thatis AI will often rebuild code
that already exists because ithasn't really thought of the
fact that, oh well, we alreadyhave a piece of code that does
this particular thing.
Because it's so fast at buildingsomething, it doesn't
necessarily do a great job ofreuse, which reduces the
resilience of the software,makes it unnecessarily complex.
So there's a lot of things youhave to watch out for.

(49:43):
But again, my goal in thesecomments about vibe coding isn't
to cast a shadow saying hey,don't do it.
Just don't think of it as the100 percent solution.
It's much more of anaugmentation scenario.
So it very much supports whatyou're just saying, but it's
just something that you shouldalso tie back into some kind of
professional review process.

Speaker 2 (50:03):
Okay.
So I think what you're sayingis keeping that human in the
loop is important.
I know we covered, probably afew months ago at this point on
the pod, that the CEO ofAnthropic, dario Amadei, said
that in a year from that time,every single line of code would
be generated by AI.
So if that's the case, we canvibe code, but it's important to
have a human in the loop thatcan review that and kind of

(50:25):
address all the issues that youjust mentioned.

Speaker 1 (50:27):
Yeah, or maybe another AI in the loop that
reviews it, and this other AI ismore deeply trained on your
approach to software developmentand that can be set up once and
then reviewed.
So there's kind of levels ofabstraction that we can get to
that will help us furtherautomate.
I agree with Dario Amodei on that statement.
I think all code will beautomated at some level.

(50:47):
But where we are right now,where I'm simply trying to
highlight both an excitementthat is there for us and I hope
everyone shares, but also a bitof caution, because at the
moment, as good as these toolsare at building working software
, they can absolutely buildsoftware that creates problems
for you that you don't realizeright.

(51:07):
So you have to be verythoughtful and there's a lot
more that goes into it than justkind of hanging out with you
know, download Cursor orWindsurf and just start talking
to it and see what happens.
It's an awesome thing to go tryout, but don't press the button
to take it live on your websitewithout you know someone that
has some expertise in thischecking it out for you,

(51:28):
absolutely, absolutely.
Last thing I want to justquickly say on that is I do
think people look at the trendline and need to understand this
means that software developmentis going to become extremely
accessible to all associations.
A lot of associations have saidhey, we don't want to build this
custom member application on mywebsite because it's expensive
to maintain, it's expensive tobuild.

(51:50):
I don't have to deal with that.
Let's just use the standardout-of-the-box tool that the
vendor provides in the AMS, andit's a little bit less efficient
for my members to use.
But it's good enough.
You can re-evaluate thatposition, right, you can think
about it from the viewpoint ofwell, but maybe I can have a
vendor help me out.
But it's going to takeone-tenth the time to build it,

(52:10):
so my costs are going to belower.
But also, if I need to maintainit, the cost of it is way, way
lower too.

Speaker 2 (52:17):
I'm sure we'll be keeping a close eye on vibe coding in the coming months and years, even though you hate the term. Not the actual concept, but the whole phrase, vibe coding. It is a little bit, I don't know.

Speaker 1 (52:28):
I don't know. It just seems like something out of an episode of Dumb and Dumber or something. Like, you know, it's more of just, it just sounds weird to me, but I'm a little bit of a curmudgeon, I guess.

Speaker 2 (52:37):
Coding on vibes.
I think it's because the word vibes is maybe popular with the youth nowadays, so that might be what you're associating it with. Yeah, probably. Well, everybody.

Speaker 1 (52:52):
We hope you all have great vibes for the rest of your week, and we will see you all next week. Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from

(53:13):
webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.