November 21, 2024 35 mins

In this episode of Sidecar Sync, hosts Amith and Mallory explore whether prompt engineering is becoming obsolete with advancements like Stanford's DSPy and Zenbase, and discuss OpenAI's upcoming Operator agent, which promises to change how AI integrates with everyday workflows. From communicating effectively with AI to weighing the risks and rewards of emerging AI agents, this episode is packed with actionable insights for professionals navigating AI-driven transformations.

🔎 Check out the NEW Sidecar Learning Hub:
https://learn.sidecarglobal.com/home

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecarglobal.com/ai

🛠 AI Tools and Resources Mentioned in This Episode
Stanford's DSPy ➡ https://stanford.edu/dspy
Zenbase ➡ https://zenbase.io
OpenAI Operator ➡ https://openai.com
ChatGPT ➡ https://openai.com/chatgpt
Claude ➡ https://anthropic.com

🔎 Check out Sidecar's AI Learning Hub
 https://learn.sidecar.ai

📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/

👍 Like & Subscribe!
https://x.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecar.ai/
https://www.linkedin.com/company/sidecar-global/

Amith Nagarajan is the Chairman of Blue Cypress, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

https://linkedin.com/amithnagarajan

Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.

https://linkedin.com/mallorymejias

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Amith (00:00):
I think this is where people are going to get their first taste of automation. But when the model starts weaving its way into other aspects of their workflow, working for them in various ways, that's going to be, I think, a pretty interesting kind of general-purpose capability that will get exciting.
Welcome to Sidecar Sync, your weekly dose of innovation.

(00:22):
If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions.

(00:42):
I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host.
Greetings everybody and welcome to the Sidecar Sync, your home for all the content you can possibly stand related to artificial intelligence and associations. My name is Amith Nagarajan and my name is Mallory Mejias. And we are your hosts and we can't wait to get into another

(01:04):
exciting episode with two fantastic and super timely topics in the world of AI and associations. But before we do that, let's take a moment to hear a quick word from our sponsor.

Speaker 3 (01:16):
Introducing the newly revamped AI Learning Hub, your comprehensive library of self-paced courses designed specifically for association professionals. We've just updated all our content with fresh material covering everything from AI prompting and marketing to events, education, data strategy, AI agents and more.

(01:37):
Through the Learning Hub, you can earn your Association AI Professional certification, recognizing your expertise in applying AI specifically to association challenges and operations. Connect with AI experts during weekly office hours and join a growing community of association professionals who are transforming their organizations through AI.

(01:58):
Sign up as an individual or get unlimited access for your entire team at one flat rate.

Mallory (02:10):
Start your AI journey today at learn.sidecarglobal.com. Amith, how is it going today? We haven't recorded a podcast together in a couple weeks here.

Amith (02:18):
Yeah, it's been a bit of time, so I'm doing great.
How about you?

Mallory (02:21):
I'm doing really well. I have some big news on the personal front that I wanted to share with our Sidecar Sync listeners and viewers: I got married on November 9th (not me forgetting already, November 9th) and then I went on my honeymoon with my husband to Mexico City. So that's where I've been for the past week or so. Just got back into the office yesterday.

(02:43):
We had a fantastic time. The wedding was great. That was back in New Orleans with all our friends and family, and then we had truly an exciting, amazing few days in Mexico City as well, eating some of the best food ever. I will say, Amith, I don't know if you've been to Mexico City, but the food is fantastic. Well, I was in Mexico City once when I was a kid, like

(03:20):
single-digit years. I don't remember exactly when, but I… days where we didn't think about anything else. But I'm happy, I will say, to have the wedding behind me. As fun as it was, it was quite a feat to plan so many little details, and when we're already dealing with events, you know, in the world of Sidecar, it's kind of a crazy thing to also have personal events going on simultaneously.

(03:41):
So happy that November 9th is behind me.

Amith (03:45):
Yeah, it's pretty amazing. You managed to pull off digitalNow and then, a couple of weeks later, got married. And if my planner brain was kicking in, if I was doing something like that, I might say, hmm, well, I wonder if I can combine the two and get a better deal.

Mallory (04:01):
No, no, no, you're hearing it right now.

Amith (04:04):
We could have had a session just for you guys at digitalNow.

Mallory (04:08):
That might sound like my worst nightmare, Amith. You don't expect our first one-year anniversary to be combined with digitalNow in Chicago. It might have been a good deal to save, but what I will tell you is that it's all relative, because so many people, so many of my friends, said, oh, a wedding must be a lot of details to take care of. But I said, well, after putting on digitalNow, honestly the

(04:30):
wedding was easy. Like, I hate to say that. I'm sure for some people it is not easy at all, but not having to coordinate all these sessions and speakers, honestly, it was one night, a few hours. So I would say it's all relative there with the event stuff.

Amith (04:48):
Now, Mallory, the most important question in the context of this podcast I can ask you about that is: did you use AI at all to help you with your wedding?

Mallory (04:55):
Oh man, I was not expecting this question. I don't know that we did. I hate to put it out there. I don't know that we used any AI. My husband is also an avid AI user. He loves a good ChatGPT and Claude, so maybe he did on his end. We did write our own vows, but I explicitly stated we were not allowed to use any AI models to write our own vows.

(05:17):
So allegedly we should be good there, but no, I don't think we really did in the planning process.

Amith (05:25):
What a shame. Well, maybe not. You know, there are definitely domains where it makes sense, because you want to really kind of get into all the details. And so I think perhaps the opportunity with AI is that that which makes us uniquely human is what we want to focus on, right, where we get to spend more of our time because AI takes care of a lot of the other things.

Mallory (05:43):
So interesting, that's true. That's a really good segue into our first topic of the day, which is kind of a hard question to ask: is prompt engineering dead? And then we're going to follow that up with topic two of the day, which is OpenAI's Operator agent, which should be released soon. So the first question: is prompt engineering dead? We've obviously all seen how important prompt engineering has

(06:06):
become in the AI world, and if you're not familiar with that phrase, prompt engineering, it's the art of knowing exactly how to phrase your requests to get the best results from AI models. It's become a skill that some people have built entire businesses around, so pretty important within the last few months, few years. But now we're seeing something fascinating: the emergence of

(06:26):
tools that could automate this process. Tools like Stanford's DSPy and startups like Zenbase are developing systems that use AI to optimize prompts automatically. These tools can test thousands of variations, learn from successful outputs and systematically improve results without human intervention. So, instead of manually crafting prompts, developers can

(06:49):
now show these systems examples of good outputs and the AI will reverse engineer the optimal prompt. So this raises interesting questions about the future of prompt engineering. If AI can optimize its own prompts, will human prompt engineering become obsolete, or will it evolve into something new, perhaps focusing more on

(07:10):
defining what good looks like rather than crafting the perfect instructions? So, Amith, this is an interesting question for us at Sidecar because, as you all know, we have an AI prompting course. So I hope, in this moment, right, prompt engineering is not totally dead. What are your initial thoughts?
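
For readers who want to see what tools like DSPy and Zenbase are doing, here is a minimal, hedged sketch of example-driven prompt optimization with the open-source DSPy library mentioned above. The model name, the metric and the tiny training set are illustrative placeholders, and the exact API can differ between DSPy versions.

    import dspy
    from dspy.teleprompt import BootstrapFewShot

    # Point DSPy at a language model (model name is an illustrative placeholder).
    dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

    # Declare *what* you want (inputs -> outputs) instead of hand-writing the prompt.
    summarizer = dspy.ChainOfThought("article -> summary")

    # A few examples of "what good looks like" stand in for hand-tuned prompt text.
    trainset = [
        dspy.Example(article="Long article about association membership trends...",
                     summary="Membership grew, driven by young professionals.").with_inputs("article"),
        dspy.Example(article="Long article about an annual conference...",
                     summary="Attendance hit a record; the hybrid format drove growth.").with_inputs("article"),
    ]

    def metric(example, prediction, trace=None):
        # Toy metric: reward short summaries that share words with the reference.
        overlap = set(example.summary.lower().split()) & set(prediction.summary.lower().split())
        return len(overlap) > 2 and len(prediction.summary.split()) < 40

    # The optimizer searches for prompts/demonstrations that do well on the metric.
    optimized = BootstrapFewShot(metric=metric).compile(summarizer, trainset=trainset)
    print(optimized(article="Another long article...").summary)

The point of the sketch is the division of labor Mallory describes: the human supplies examples and a definition of "good," and the tooling works out the prompt.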

Amith (07:27):
Well, so we do have our AI prompting course for only $24. People can jump on and check that out. It's a great introduction to AI, and the reason we still have it up there is because it's still relevant. And prompting as, like, a defined discipline, if you will, probably won't exist. But will you be able to?

(07:47):
Will you need to be able to communicate effectively with AI? The answer to that question is absolutely yes. But what's happening with AI and prompting is the same thing that's happened with computing since the very beginning. You know, computing started out with very esoteric and difficult-to-use ways of communicating with computers. You know, the first computers you programmed with, like,

(08:07):
physical switches, and then a big innovation came out and there were punch cards. And then after that there were, you know, some basic ways of inputting text, and programming languages evolved, but they started off very low level, super, super cumbersome to use, with things like assembly language, and then we went into higher and higher levels of languages. So if you think about the history of computer languages,

(08:29):
you know, back in the 60s we had something called BASIC, which was the first kind of sort of human-readable computer language that really took, you know, significant market share, and people who weren't, like, deep computer scientists started using it. And that progress has continued and continued and continued, and computer languages have continued to get easier to use and smarter. The big deal back in October, November 2022, when the

(08:53):
ChatGPT moment occurred, was that the general public first had a taste of what it's like to communicate at a pretty high level of resolution with a computer through natural language, right? And so that first interaction was a major step change for everyone to say, hey, wait, you can actually talk to these things without knowing how to code or without knowing
to these things without knowinghow to code or without knowing

(09:13):
the correct sequence of clicksin a particular application.
You could just literally typesomething and get a response.
Now, of course, the firstversions of that were quite
rudimentary, and so our promptskills were super important,
because if you were good atprompting the computer or really
the AI model I should say wascapable of producing a useful
output.

(09:33):
But in the earlier days, if youdidn't have really good
prompting skills, it would bedifficult for you to get a whole
lot of value.
A great example that actuallyis with imaging models or image
generation models.
You know, using the earlyversions of Midjourney through
Discord, using the earlyversions of DALI, were very,

(09:53):
very difficult to get anythinguseful.
And then when OpenAI integratedDALI into ChatGPT I think that
was about a little bit over ayear ago they actually did kind
of a meta-prompting layer thatwhat you just described where
you say, hey, I want an imagethat describes you know two
people getting married in NewOrleans and then flying off to
Mexico City for their honeymoonand you'd get something if you
prompted the original DALI withthat.
But what happens now is ChatGPTtakes that and goes through a

(10:16):
little bit of what I'd call kindof meta reasoning or meta
prompting, where it then sends aprompt to the DALI model that
was actually written by ChatGPTand all of it to you is totally
transparent.
But if you actually click onthe image you can see the
prompting that's going on behindthe scenes.
What you're describing, Mallory, is just more of that. It's going to be more and more and more of this higher-order

(10:37):
stuff, and the models themselves are just going to become smarter. So if you kind of look at that progression that I've tried to paint the picture of, from the earliest days of computing through now, we have gone from paying higher and higher taxes early on, and tax in the sense of a metaphorical tax, in terms of pain or knowledge requirements

(10:58):
to interact with the computer, to now, almost nothing. Right? It's super fluid and it's getting better. So I would actually say that the most important thing that you have to really look at, if you want to know where the puck is going to be, so to speak, is communication skills. So if you want to be effective in getting what you want from an AI, you have to be good at articulating yourself.

(11:18):
Just like if I go to an employee and I say, hey, I really want you to produce this project for me, and I do kind of a lousy job of defining the requirements and I don't really provide a lot of detail, or I'm kind of ambiguous in the language I use, you might not get something super great, right? And with AI it's the same thing. I think AI systems are going to be better at coming back to you

(11:38):
and refining the knowledge or the ideas with you and then executing on them. But that's my two cents. I don't think prompting as its own skill is something people are going to be talking about five years from now. But I do think understanding how to communicate with AIs is going to be pretty important, and the AIs are going to get better at understanding our fairly bad communication skills

(11:59):
generally as a population, right? So I think that what's going to happen is there will always be people who are at the frontier doing really cool things, that have the greatest skills in this area. Right now, what I would say is, zooming back to late 2024, where we are today as we record this episode: prompting is super important. You have to know the basics, you have to understand what you

(12:21):
are trying to do and put it in terms that the AI can understand well. You'll get so much more out of it, which is, going back to the earlier discussion, why we have an AI prompting course in the fall of 2024.
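
As a rough illustration of the meta-prompting layer Amith describes (a chat model rewriting a casual request into a detailed prompt for an image model), here is a hedged sketch using the OpenAI Python SDK. The model names and the wrapper function are illustrative assumptions, not a description of how ChatGPT and DALL-E are actually wired together internally.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def meta_prompted_image(casual_request: str) -> str:
        # Step 1: a chat model expands the casual request into a detailed image prompt.
        rewrite = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Rewrite the user's request as a single detailed prompt for an "
                            "image generation model. Return only the prompt."},
                {"role": "user", "content": casual_request},
            ],
        )
        detailed_prompt = rewrite.choices[0].message.content

        # Step 2: the image model sees the machine-written prompt, not the original text.
        image = client.images.generate(model="dall-e-3", prompt=detailed_prompt, n=1)
        return image.data[0].url

    print(meta_prompted_image(
        "two people getting married in New Orleans, then flying to Mexico City for their honeymoon"))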

Mallory (12:33):
So, then, it's safe to say that we're all heading in the direction of prompting being baked into these models for us. But if you want to move quickly, if you want to move right now on this, prompt engineering is still an important skill.

Amith (12:45):
If you're better at prompting, you're going to get a significantly different output, and, you know, the smartest models are already quite good at interpreting what it is that you're trying to do and asking follow-up questions,

(13:05):
things like that. But we're going to have AI everywhere. We're going to have AI on our phones. We're going to have AI on our watches. We might have it somewhere else, you know, like on our refrigerator or something. And a lot of these other environments are not going to have frontier AI. They're going to have OK AI, and even OK AI is getting pretty good. But I would say that just being thoughtful about what you're

(13:26):
asking for and how you interact with the AI will make you a better communicator in general. The kind of weird things that you've had to do, like "think step by step," you know, that's a prompt engineering tactic that's available and actually is still quite useful in many environments. That's going to go away. You know, those kinds of things that are more like, how do you kind of seed the model to behave in a certain way?

(13:47):
The model is going to figure that stuff out. You're not going to have to be that explicit. But again, if I say, hey, I want the model to write a blog post for me, and I'm not very specific in what I ask for, I'm going to get, you know, not the greatest response.
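
To make the "think step by step" tactic concrete, here is a small hedged sketch using the OpenAI Python SDK. The model name and the example question are placeholders; the point is simply that the tactic amounts to prepending an instruction to an otherwise identical request.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    QUESTION = ("A membership costs $120 per year with a 15% early-bird discount. "
                "What does a three-year early-bird renewal cost?")

    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Bare prompt: the model may jump straight to an answer.
    print(ask(QUESTION))

    # Classic tactic: explicitly ask for intermediate reasoning before the answer.
    print(ask("Think step by step, then give the final answer.\n\n" + QUESTION))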

Mallory (14:03):
So prompting is kind of like searching the web. Most people will be able to do it without kind of even realizing what strategies they're using, but then some people will be better at it than others.
Amith (14:13):
Totally.

Mallory (14:15):
And then I also want to loop back to this idea you mentioned, Amith, of cultivating communication skills on your team, maybe not necessarily prompt engineering skills, over the next few years. How do you recommend going about that, especially as it pertains to communicating with technology?

Amith (14:32):
Well, you know, probably the most important skill we all can work on is getting better at listening. And so then you might say, well, how's that related to AI? You want to just listen to the AI do its thing? Well, I think the reason I bring that up is, listening requires you to be fully present and to think about what it is the other party, whether it's a human or an AI, is

(14:56):
trying to communicate to you, and then be more thoughtful in the way you respond. A lot of people, their version of listening is essentially waiting for the other person to stop talking so they can speak, and to the extent that they're thinking much at all while the other person is talking, they're thinking about what they are going to say. They're not really activating their listening skills fully.

(15:19):
And so, in a way, that's super critical here, because if you really pay attention to what the AI is doing, I think you'll be a better partner to it, in a sense, and you'll ultimately get a better result from it. But I think it's kind of cool, because what would make me better at using the AI systems of 2026 are probably the same things that would make me a better teammate in 2026, or even today.

Mallory (15:38):
That's a really interesting take on it. We've gotten the question before in the Intro to AI webinar that I'll present to you now. Do you think that using AI models in this way, especially as they get more sophisticated and we don't have to prompt them as much, let's say, if we're brainstorming for a new project

(15:58):
that we want to launch, do you think AI models will in some ways make us lazy, in the sense that we'll do less thinking on the front end because we know we can kind of pass off this duty to a technology? What do you think about that?

Amith (16:09):
Well, I think it's a great question. You know, I think if you broadly look at, you know, kind of the bell curve of people, you're probably going to see more of that than not. I also think that people who really want to look ahead and say, how can they be the best versions of themselves, you know, as a human, you're going to say, well, what can I do with my time? How can I actually use those free cycles to work on the

(16:32):
things that I feel like I can contribute the most to, or the things that I get the most value from personally? What is that intersection, right, of value creation and happiness and passion slash purpose? You know, and I think that we have a lot of times when people have talked about that, you know, it's kind of the most generic, like, you know, what do people talk about? Like, hey, where do you want to focus your energy, and what do

(16:53):
you want to be when you grow up, or where should you go with your career, or whatever. I think the most generic skill, or the most generic advice, is to focus on skills that are at that intersection of, you know, what brings you joy and what can create value for the world. And in reality, unfortunately, throughout human history, the areas where most people are able to create value that's

(17:15):
economically, you know, viable for them, you know, in terms of income for themselves and their family, don't usually create a lot of joy. But this is what's interesting here: with AI, potentially, you know, there is the opportunity to say, hey, let's delegate all this stuff, so therefore we're being lazy. The flip side is, I do think that a lot of people are going to be able to switch on a whole bunch of circuits in their

(17:38):
brains… categories of jobs created, new types of work that people are doing. But it's going to take people wanting to

(18:02):
actually participate in that, you know, and a lot of people are going to say, this is awesome, I'm just going to go watch more Netflix. And that's a problem, because if you have that kind of a mindset, you're probably going to quickly find yourself not working, and that's a general societal issue that I have no answer for. But I think there's going to be a lot of that as well.

Mallory (18:20):
Yeah. At the end of the day, we're not rats on a wheel, even though maybe some of the tasks that we do can feel like that sometimes. I think we're all innately creative, and how we tap into that is different for everyone, but I agree with you. I think there's the opportunity for people to be lazy, but there's also a very exciting opportunity in the near term for us to be much more creative, which is exciting.

(18:41):
Moving to topic two today: OpenAI's Operator agent, which could be a good title for a movie, I think. OpenAI is set to launch an AI agent, codenamed Operator, in January 2025, designed to perform tasks on behalf of users with minimal human intervention. Some notable capabilities would include booking travel

(19:02):
arrangements, managing emails and schedules, conducting online research, completing online transactions and automating administrative tasks. This AI agent is designed to operate directly on user devices, function within web browsers, use a computer to take actions on behalf of users and process information in real time.

(19:24):
Now, it's said that OpenAI plans to release Operator as a research preview through their API for developers in January 2025, so we would likely see a full release a few months after that. Amith, we've talked about Claude computer use on the podcast. Now we're talking about OpenAI's Operator. We haven't talked about Google's Jarvis project, but it

(19:46):
is similar in the sense that it's an AI agent that can use a computer as well. We maybe a bit too early predicted 2024 would be the year of the AI agents, but it's looking like 2025 will be the year of the AI agents. I'm curious what your initial thoughts on this announcement are.

Amith (20:02):
So, Mallory, I think this is one of the most exciting areas for everyone, because this is a type of agent that's not going to require any customization, configuration or setup. It's just basically saying yes to OpenAI or Claude or somebody else having access to resources on your computer. That might be to actually control the computer, so the

(20:25):
Operator can actually do stuff for you. You can prompt it, it can go into Excel or Word or Google Sheets, and it can actually operate the computer for you. But also, another aspect of this is just having the computer, having what's on the screen and what you're doing, be visible to the AI model, which by itself, even if the AI model doesn't actually take the action for you,

(20:46):
there's a lot of value in that, and I'll come back to that in a second. But I think this is where people are going to get their first taste of automation. They've been getting a taste of automation in the sense that the model has created a lot of artifacts for them, whether it's blog posts or code or whatever. But when the model starts weaving its way into other aspects of their workflow, working for them in various ways,

(21:08):
that's going to be, I think, a pretty interesting kind of general-purpose capability that will get exciting. So, you know, some of the use cases you mentioned, I think, are great. One of the ones that I would mention, though, is, before you actually even think about the Operator, you know, doing things on your computer, if you were to say, hey, I'm just going to let it see what's on my computer, and then I'm going to have a conversation

(21:30):
with it. So let's say, for example, that, Mallory, you were working on our website, which is in the HubSpot platform, and let's say you were trying to figure out how to do something. So you might say to ChatGPT or Claude, hey, I'm running into this problem. I'm using the HubSpot CMS, I am trying to position this particular element over here, but it keeps appearing over in this other spot.

(21:50):
What can I do about it? Right, it's like kind of an inherently visual kind of thing that you're trying to explain. You might, if you're particularly advanced in current AI, say, I'm going to take a screenshot of that, I'm going to drop the screenshot in Claude or ChatGPT. What if you just said to those apps, okay, I'm going to let you see my screen? And then you're not even necessarily prompting it. You're basically, like, through audio, basically saying, and it

(22:16):
could be through text as well, of course, but you're just essentially saying, hey, Claude, listen, as you can see on my screen, I'm trying to move this element from here to here. I can't do it. What do you think? And then Claude might respond to you and say, well, actually, Mallory, you need to click this other option over here. Would you like me to do that for you? And Claude does it for you, or not. You know, you can do it yourself. But even just the idea of,

(22:36):
you know, the computer being given the right to see what's on your screen, or a portion of your screen perhaps, is pretty interesting in terms of workflows. In the world of programming, there's a massive value creation there, where, you know, you're working on something in a database or in a website or whatever it is, and the computer, the AI model, can see what

(22:57):
you're doing. There are also massive productivity gains, because a lot of times people are going back and forth. They have the model open in one window, like ChatGPT, and then in the other window, side by side, they're actually doing their work, and they're going back and forth, but they're having to pay the tax, right, as I like to describe it, of describing to the model what's going on and what they're trying to accomplish. Whereas if you imagine the model just always being there, and you can talk to

(23:19):
it in audio, which all these things support, right, that could get really interesting really fast. That's even before the model turns into an agent and takes control of your device. It just gives that model more senses, so it could be more effective in helping you. And then, of course, the layer on top of that is letting the thing actually control your computer.
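
As a hedged sketch of the "let the model see my screen" idea, the snippet below captures a screenshot and sends it to Claude along with a question via the Anthropic Python SDK. The mss screenshot library, the model name and the file path are illustrative assumptions; this is not how Anthropic's built-in computer use feature works under the hood, just the general pattern of giving the model another "sense."

    import base64
    import anthropic
    from mss import mss  # third-party screenshot library: pip install mss anthropic

    # Capture the primary monitor to a PNG file (path is an illustrative placeholder).
    with mss() as screen:
        screenshot_path = screen.shot(output="screen.png")

    with open(screenshot_path, "rb") as f:
        screenshot_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

    client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative model choice
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/png", "data": screenshot_b64}},
                {"type": "text",
                 "text": "As you can see on my screen, I'm trying to move this element in the "
                         "HubSpot editor and it keeps landing in the wrong spot. What should I click?"},
            ],
        }],
    )
    print(message.content[0].text)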

Mallory (23:42):
I think sharing your screen with an AI model is one thing, but you mentioned providing access to files that you have as well. I'm sure there are people listening to the pod or watching on YouTube who think, I'm probably not going to sign up for this Operator agent until maybe the masses try it out and let us know if it's safe and secure. What are your thoughts on that, of how to approach kind of

(24:03):
privacy, security concerns?

Amith (24:06):
Well, this is a little bit of a side note, but I wouldn't necessarily want anyone to think that just because the masses are going to go do something, that it's safe and secure, because there's lots of people who do a lot of foolish things in the world, right, both online and in, you know, the world of atoms as opposed to bits. So I think that's one thing to always be thinking about with AI. But I agree, like, it's the same privacy and confidentiality

(24:30):
kind of question you would ask of yourself before you upload a document to ChatGPT manually. But if you give it access to your full computer and you say, hey, you can do whatever you want to do with my computer, of course, then you're giving it, like, you know, sysadmin-level privileges to do anything, and so there are probably shades of gray within that. I haven't used this particular tool. There are probably various settings that you have, but

(24:52):
ultimately you are giving that tool a tremendous amount of power. But keep in mind, the flip side of that is, you do that routinely. You, first of all, use an operating system that's produced by a company. It might be Google's Android on your phone, it might be Apple for their iOS, or it could be the Mac operating system or Windows or Linux or whatever, and fundamentally, just by

(25:16):
virtue of running on an operating system, those companies have access to a lot of stuff. A lot of us also use cloud-based storage, whether it's Dropbox or FileShare or Microsoft or Google, so we are accustomed to using a lot of digital services that have very wide access to our digital lives, and so the AI having wider access to our digital lives isn't necessarily, to me, the problem.

(25:36):
The question is: is the company trustworthy? And that is an open question for all of these businesses, because they're all brand new; they don't really have much of a track record. OpenAI, Anthropic, Mistral, they are all fairly new companies. Mistral is like a year and a half old. OpenAI is a handful of years more than that. So I'm not suggesting I think they're not trustworthy.

(25:57):
I'm simply saying that one of the reasons I think people are so comfortable with giving Google or Microsoft so much of their data is because these companies have been around a really long time. They're obviously very large. They say that they've done all these things to safeguard your data, and they have. But these other companies that are earlier in their journey, younger, less capable in some ways, or perhaps less focused on

(26:19):
security, that's where you have to think about it. So it's 100% a topic that you should be thinking about deeply, you know. The thing I would point out is, you know, I mentioned operating systems and file sharing tools and stuff like that. I've mentioned this on the pod a number of times, and if you've heard me speak, you've probably heard me say this on stage, but a lot of people just let any AI note taker just jump on

(26:44):
in to their meetings. It's like, hey, I'm going to have this super confidential meeting, I'm going to talk about whatever, something I definitely don't want to be outside of the confines of a handful of folks, but I'm going to let them all bring their MeetGeek and their Read AI and whatever the hell else they want to bring with them, and I'll think about it. And I don't do that because I'm paranoid about this stuff, but I have no idea who those companies are.

(27:05):
Right, I don't know who they are. I don't know if they're on the free plan, which means the data you provide, typically, if you're free, is essentially you contributing to that company's future, or if they're buttoned up, right. So I think we make these decisions. There's this dissonance between reality and our perception of reality that we just kind of block off, and so I think people

(27:28):
need to be aware of all this stuff. I'm not saying don't use these tools. What I'm saying is, just have a consistent approach to saying, hey, I'm going to think about how much diligence I do before I use Read AI or MeetGeek or any one of these other tools. In a similar fashion, with a tool that's going to have access to your computer, think about it a little bit, because

(27:50):
everybody and their cousin is going to have a similar tool, like the Operator tool and like Anthropic's tool, that has access to your computer. Everyone's going to have that. It's going to become super commoditized, and you'll find stuff for free. A lot of that free stuff will probably be malware, and, you know, there's going to be a lot of collateral damage, unfortunately.

Mallory (28:20):
And that's from bad actors. But even with companies that have good intentions, you're going to have problems. So, us going to Blue Cypress, our parent company, and saying, you know, we really want to try out this Operator thing: Blue Cypress as a whole will have to kind of go through this decision-making process of, will we allow it, what kinds of actions will we allow Operator to take, potentially, or another tool, and who will be at fault, right, if Operator does

(28:40):
something incorrectly? So I would like to hear a little bit, initially, about what you think that thought process will look like in a few months when, as a business leader, you have to make that decision.

Amith (28:52):
Well, I'm going to look at it from my perspective. I'm going to look at it as: let's try it out with a handful of people who are a little bit more savvy, who are a little bit more cybersecurity conscious. We do a lot of work around Blue Cypress to give people basics in cybersecurity, but some people are obviously both a lot more knowledgeable and also a lot more thoughtful, like, day-to-day, about that type of stuff.

(29:14):
A good example is the person who uses a VPN when they're on the airport Wi-Fi versus the person who's like, whatever. The latter case, that's not going to be one of our early adopters. So we kind of have a good sense of that. Our company's not super big. We have a pretty good idea of who our people are and what their skills are, so we'll probably start with a fairly small number of such users and then expand it.

(29:37):
There's risk with everything in life. The easiest thing is to say, I will mitigate my risk by not using any AI at all. Of course, then you're creating the biggest risk of all, which is the fact that it's actually not even a risk; it's a fact that you're going to be out of business pretty soon. So I think you have to balance these things, and so for us, I sense that we will probably operationalize it gradually,

(29:58):
particularly because this is a pretty big potential security hole if you don't manage it reasonably well. I also think that part of it has to be just going back to education. People don't spend time thinking deeply about these things as much as maybe we'd all like them to do in theory, but we also want them to do their jobs, right? And so, you know, we need to

(30:18):
provide guidelines and policies to people, to say, look, this is what you're allowed to put on your computer, this is what you're not allowed to. With our work-issued laptops, we can control what's allowed to be installed on there, so people can't just install garbage. But at the same time, people use a lot of personal devices that an IT department doesn't have control over. So you might feel all safe and secure because IT has policies

(30:41):
on your work laptops, but then someone might be logging in on their home computer or whatever, and they might have computer use turned on, and that computer may have access to a terminal window, to something in the corporate environment, or that particular computer might just have access to files that are business files. Right? That happens all the time, and how do you control that? The short answer is: you really can't.

(31:02):
So you have to double down on education with your team.

Mallory (31:07):
What do you think about controlling, or I shouldn't say controlling, but creating guidelines around what types of activities these AI agents could be used for? Because that can be so vague as well. You can't quite possibly list out all of the activities it can be used for, but what do you think about that?

Amith (31:22):
Well, I think what we should be trying to do is to give people good examples and bad examples. So, the more we are able to say, look, by example, we're going to show you, these are good use cases, and here's why, these are the reasons that these are good use cases. Everything has a little bit of risk to it, or sometimes a lot of risk, but the reward is such that it is a good example of a good use case. And we provide examples of vendors, and we have

(31:45):
agreements with vendors like OpenAI or Anthropic or whoever, so that we say, hey, we trust these vendors, so use only them. That way, you have guardrails on the tool use to some extent. And then also share bad examples. Share, like, the thing that I just mentioned, which is just like random note taker tools popping into your Zoom call or whatever.

(32:06):
Don't allow that. Don't feel bad about declining entry to your Zoom meeting to an AI notetaker. It's not like you're denying access to a person. So I think stuff like that needs to be constant; there's no end to the education process. This world is moving so fast. You've got to invest in educating your people and in

(32:27):
policies. Policies are going to be a little bit different by organization, but there are some general-purpose, kind of common sense things. I think the biggest thing is just, everyone needs to understand these tools at some level, and if they have no concept whatsoever and they view it as just a magical machine, you're not going to have good results; you're going to have a problem.

Mallory (32:54):
We know Microsoft owns 49% of OpenAI, so we will surely be seeing something like an Operator agent pop into Microsoft Copilot, probably sometime within 2025. But based on what we've seen thus far, Copilot tends to be a little less strong than what we see come out of OpenAI, at least directly, at least initially. So what are your thoughts there? Should associations kind of wait until they see this appear in Copilot? Should they start experimenting beforehand?

(33:14):
What do you think?

Amith (33:16):
You know, Copilot within the Microsoft Office world, and just in general across all the Microsoft products, I think, is a super interesting thing. It is an AI within the context of your existing work environment. So if you sign up for Copilot, it has access to your SharePoint, it has access to your OneDrive, it has access to all your stuff, and so, yeah, clearly, because it's coming from

(33:39):
Microsoft, you're going to look at it and say that they're going to be compliant to the same level of security and standards that they have for all of your other stuff. So I think that's probably a reasonable perspective to have. My experience with the Microsoft Copilot first generation has been both positive and negative. It's generally not as capable as ChatGPT or Claude or even

(34:02):
Gemini, but it is quite useful to have that AI sitting in your Word document or sitting in your Excel document, and this new wave of capabilities that they've announced over the last couple months is quite exciting. I think it's great marketing right now, because it's not something that we actually see in practice people are using, but the opportunities are immense. They're on the right track with it. And Microsoft is famous for

(34:25):
this, but it usually takes them three major iterations of a product before it really is the product people want, and what I like about that is they go out there in the market and they put something out there and see what happens. You know, people talk about, like, minimum viable product and lean startup mindset and all this stuff as if it's some brand new invention that really smart young people in Silicon Valley came up with.

(34:45):
Well, Microsoft's been doing this since 1975, and a lot of other people have done similar things before them. So the key to it is just to understand where these products are in the life cycle. You're buying into a vision more than you are buying into a set of capabilities today. So, you know, we're a big Microsoft shop. I think they're doing a really good job with what they're doing, but they're doing it at a scale that is really

(35:06):
unprecedented. So part of the reason the limitations exist is that they are rolling it out for, you know, millions and millions of users, in a way that's a lot deeper than a consumer product like ChatGPT, that doesn't have to consider the context of, like, your whole SharePoint and things like that.

Mallory (35:24):
Well, that is a wrap, everyone, on episode 57 of the Sidecar Sync podcast. We will see you all next week, on holiday week.

Amith (35:32):
Awesome. Thanks for tuning in to Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from

(35:55):
webinars to bootcamps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.