Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Amith (00:00):
The state of the art in terms of what models can do. We have barely scratched the surface in terms of the applications we can create using current AI.
Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven
(00:21):
by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amit Nagarajan, chairman of Blue Cypress, and I'm your host.
Greetings and welcome to the Sidecar Sync, where you can get
(00:44):
all the information you need about associations and artificial intelligence and how AI can be applied to moving your association forward. My name is Amit Nagarajan.
Mallory (00:55):
And my name is Mallory Mejiaz.
Amith (00:57):
And we are your hosts.
Before we get going on an exciting episode where we're going to be talking about a couple of really cool new AI tools and how they apply to the world of associations, let's take a moment to hear a quick word from our sponsor.
Mallory (01:10):
Digital Now is your chance to reimagine the future of your association. Join us in the nation's capital, Washington, DC, from October 27th through the 30th for Digital Now, hosted at the beautiful Omni Shoreham Hotel. Over two and a half days, we'll host sessions focused on driving digital transformation, strategically embracing AI and
(01:30):
empowering top association leaders with Silicon Valley-level insights. Together, we'll rethink association business models through the lens of AI, ensuring your organization not only survives but thrives in the future. Enjoy keynotes from world-class speakers joining us from organizations like Google, the US Department of State and the US Chamber of Commerce.
(01:50):
This is your chance to network with key association leaders, learn from the experts and future-proof your association. Don't just adapt to the future. Create it at Digital Now.
Amit, how are you doing this lovely Wednesday morning?
Amith (02:15):
I'm doing great. The weather here in New Orleans is actually quite nice and I'm enjoying that. I'm also thinking a lot about the folks in Florida who are in the bullseye of this new hurricane that's in the Gulf at the moment, so I'm hoping that it somehow dies down a little bit before it hits them.
Mallory (02:29):
For sure. Yeah, our hearts go out to everyone in Florida who just got impacted by one hurricane and is now facing another. It's devastating, for sure.
Amith (02:40):
I think we've talked in the past a couple of different times about AI weather forecasting and models. In the world of hurricane prediction, one of the good things is that in the last, I don't know, probably the last 10 to 15 years, the accuracy of the forecasts has already been really, really good. Part of what's not easy to forecast is the speed at which
(03:01):
the hurricane is going to come in, and storm surge and things like that. But they've gotten pretty good at determining the path of the hurricane, so it's much less likely to be like, oh well, we think it's going to hit Tampa and then it ends up hitting Pensacola or something instead. So that's good. But I think, as these things become more and more accurate to say, hey, three weeks from now we're going to have another hurricane come, the more time people have to plan, the better, obviously, with
(03:25):
these kinds of events.
Mallory (03:27):
Yeah, I mean, obviously this is not our area of expertise by any means, but it sounds like this is an area where AI will continue to see improvements in terms of predictions, and I'm assuming there's probably lots of artificial intelligence already being used to predict these things. Is that how that works?
Amith (03:42):
Well, the current models that are in production, both in Europe and North America, are numerical methods that predate AI. So they're not actually using AI methods for the National Hurricane Center or for the European models, and that's for the general weather forecasting. So it's very high compute-intensive numerical methods that are actually really good and really accurate, but
(04:04):
they're super, super expensive and their forecasting is at a very, very high level in terms of granularity. So part of what AI allows you to do, and we've talked a little bit about this both with the DeepMind work and also with Microsoft's model, which is more atmospheric composition stuff, is that you're able to get really granular and have near
(04:25):
instant forecasts at the very micro level, so being able to say what's going to happen in this zip code or this postal code. So I think that's going to be an interesting innovation, and the two may complement each other. At some point the AI models might supersede the traditional models, but that's hard to say right now. But I'm really excited about it, because I think the further out you
(04:45):
can look, the better, and the more specific you can get, the more it helps people. Because what happened in Asheville obviously was terrible and very difficult to predict; you wouldn't have thought that up in the hills you'd have those kinds of results. But it's a combination of the soil conditions, how long the hurricane was over that area and how much precipitation had
(05:06):
dropped. Obviously that caused such terrible damage there.
Mallory (05:10):
Yeah, for sure. It was very eye-opening for me, because I don't write off bad weather, but it's not something I look at every day unless I feel like there's going to be a really bad storm or tornado or something like that. But I think you make a great point with Asheville, that it just seemed so unexpected, and so having technology that
(05:30):
could even predict some percent likelihood that that would have happened would have been very helpful. Well, we are thinking about everyone in the hurricane's path in Florida and wishing everyone the best. Today, in episode 51, we have two exciting topics lined up for you all. The first of those will be ChatGPT Canvas, and the following
(05:53):
topic will be Microsoft Copilot's Wave 2.
ChatGPT Canvas is a new interface launched by OpenAI this month to enhance user interactions with ChatGPT for writing and coding projects. Canvas opens a separate window alongside the main ChatGPT window, allowing users to work on writing and coding tasks more
(06:14):
efficiently. The setup enables users to generate, edit and refine content in a dedicated workspace. If this sounds slightly familiar to you, it's because Anthropic's Claude has a similar feature called Artifacts that we've covered on this podcast before, but I will dive into how they're a little bit different from one another. Within ChatGPT Canvas, users can highlight specific sections
(06:37):
of text or code to receive targeted edits or suggestions from ChatGPT, making it that much easier to iterate on and improve the final output. Canvas also offers several built-in features to improve written content, and I'm going to share my screen in just a bit to show you these. We've got length adjustments, so users can shorten
(06:57):
or lengthen their documents; reading level adjustments; a final polish, where ChatGPT can refine text for grammar, clarity and consistency; and emoji formatting, so basic markdown formatting and emoji insertion are supported as well. With coding, we've got code review; comment addition, which
(07:17):
is automatic generation of inline documentation; identification and fixing of bugs; and conversion of code between different programming languages.
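For listeners who want similar transformations outside Canvas, the built-in controls described above (length, reading level, polish, emoji) map naturally onto plain prompt templates. A minimal sketch in Python, under stated assumptions: `complete` is a stand-in for any chat-capable LLM callable, and the template wording is illustrative, not OpenAI's actual prompts:

```python
# Sketch: emulating Canvas-style transformations with prompt templates.
# `complete` is a hypothetical stand-in for any LLM call, not a real API.

TRANSFORMS = {
    "shorten": "Rewrite the text to be roughly half as long, keeping all key points.",
    "lengthen": "Expand the text with more detail and examples, preserving its structure.",
    "reading_level": "Rewrite the text for a {level} reading level.",
    "polish": "Fix grammar, clarity and consistency without changing the meaning.",
    "emoji": "Add relevant emojis throughout the text.",
}

def build_prompt(text: str, kind: str, **kwargs) -> str:
    """Combine a transformation instruction with the source text."""
    instruction = TRANSFORMS[kind].format(**kwargs)
    return f"{instruction}\n\n---\n{text}"

def transform(text: str, kind: str, complete, **kwargs) -> str:
    """Run one transformation through an injected LLM callable."""
    return complete(build_prompt(text, kind, **kwargs))
```

Because the model call is injected, the same templates work against whichever chat interface or SDK you happen to use.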
ChatGPT Canvas is currently available in beta for ChatGPT Plus and Teams users, with plans to extend access to Enterprise and EDU users. OpenAI intends to make Canvas available to all ChatGPT users
(07:41):
once it exits the beta phase. To access Canvas, you can select GPT-4o with Canvas if you have one of the accounts that we just mentioned. You can also ask ChatGPT to use Canvas, which will trigger it, or it automatically detects when it thinks a Canvas would be beneficial to you.
So right now I am sharing my screen to show you all what
(08:04):
Canvas looks like within ChatGPT. I used the transcript from last week's episode, actually, where we talked about our brand new and improved AI learning hub. I asked ChatGPT to act as an expert copywriter who is skilled in writing copy that converts, and then I asked it to write a blog about our new AI learning hub launch based on that portion
(08:25):
of our podcast transcript from last week. As I did this, it automatically popped up this Canvas window, so you can see it generated a blog, a pretty decent size blog, and reading through this just at a glance, I would say this is pretty good. There's a couple of subsections: a new approach to AI education
(08:46):
for associations, new features like our AAP certification and the community, why AI matters now, and "are you ready to lead your association into the future" is kind of the CTA part of that blog.
What's really neat about this is you can actually make edits in the canvas, which is not something that you can do with
(09:07):
Claude at this moment. I imagine that they will be changing that really soon based on this release of Canvas. So there's a sentence here that says "we've moved to a better learning management system to enhance your experience." I can actually go in here and say, well, I don't really like the word "experience," maybe I want to say "learning journey" instead, and I can make edits directly here, which is nice. So
(09:32):
you can say goodbye to the days of having to generate a whole blog or social post at once, take it out, put it into a Word doc and make your edits there. You can actually make all your edits right here within ChatGPT.
Another interesting feature that I will point out is the ability to highlight a specific section of text and then make an
(09:54):
edit just based on that. So right now I'm working with that same sentence that says we've moved to a better learning management system to enhance your learning journey. I think maybe our audience might be curious about why we did that. So I'm actually going to highlight this section of text and say "add an explanation of why we moved to a new LMS," and
(10:18):
you can see right here in Canvas it edited just that portion of the blog.
And then, finally, I want to point out these built-in writing features that I discussed earlier. One of those is adjusting the length. If you click this button in the bottom right-hand corner, you'll see there's actually this drag tool where I can make it
(10:40):
longer, longest, shorter, shortest. So I'm going to make it a little bit longer, just to see what happens, and we see that it's rewriting the blog to be even more wordy. I will also demo that you can change the reading level within
(11:04):
Canvas as well, which is something that we're going to discuss in terms of transforming content that you already have. Wow, it made this significantly longer, so now I want to adjust the reading level. The options it has built in right now are high school reading level, college, graduate school, middle school and kindergarten. So I'm going to up it to, let's say we're writing this for
(11:27):
folks in graduate school, just to see what this transformation looks like. It's not all that different. I'm curious to hear what you think, Amit, but you can see it's a bit wordier, maybe a bit more verbose. What do you think?
Amith (11:44):
I'd love to see the
kindergarten version.
If you want to change to that, okay?
Mallory (11:47):
That's what I was
thinking.
Maybe we can kindergartenify it.
Might be a lot more.
Amith (11:51):
Honestly, it might be a lot better, because a lot of times the simpler the copy can be, the better. So let's see what happens here.
Mallory (12:02):
Exactly. I don't think this has ever been done. Let's talk about the AI learning hub for kindergartners, who debatably couldn't even read this, so that's kind of interesting. Okay: "AI, or artificial intelligence, is growing and changing fast, and Sidecar wants to help you learn about it." I don't know that I would say this is quite kindergarten level, Amit. What do you think?
Amith (12:22):
Yeah, it's probably a little bit above that, but it's interesting, because what it did is it actually summarized it and made it a lot shorter. That's the main thing I can see from a quick scan of it.
Mallory (12:32):
And it seems like some of the words are really simple: fast, easy. I'll also point out, and I haven't done this one yet, that you can add emojis to whatever you're working on in Canvas. We do like to use emojis here at Sidecar. Ooh, okay.
Amith (12:49):
That's a lot of emojis
right there.
Mallory (12:50):
There's an emoji, or more than one, in every single sentence, it looks like, but that is Canvas. So, as I said, you can make edits right here. You can also continue working in the regular chat interface as you're used to with ChatGPT. But to me, being able to make these inline edits is huge.
(13:11):
Really, it's a huge time saver, but it's also huge in terms of the quality of output that you're going to get when you're using ChatGPT, because you can actually infuse your own expertise and your own style into what you create. So that was a quick little demo of ChatGPT Canvas.
Now, Amit, I'm going to quote you really quickly. You messaged me on Teams about this, and you said "ChatGPT 4o
(13:34):
Canvas beta is ridiculously well done and lightning fast." So I'm going to ask you to back up that quote for our listeners and viewers.
Amith (13:42):
Sure. Well, first of all, for those of you that are only listening, you might want to check us out on YouTube, because that's where you can see the full videos and see the demos. We're going to be doing more and more of this over time, where we have interesting things to show.
So, as far as why I was impressed by it, I thought they just really did a good job with the software engineering of
(14:03):
creating an application that was easy to use and pretty intuitive. I liked the inline editing capability, that was really cool, and I also like that you can highlight things and tell it to change just that one portion. So it's like interacting with another person, where you can say, hey, Mallory, I really like that blog you wrote, but this one piece I'd like you to change in this way or that way.
(14:23):
It's more of an approximation of how you'd collaborate with another human, as opposed to the way we've been dealing with AI. As amazing as this AI technology has been in creating copy or creating code, generally speaking, each time you asked for a change, it would rewrite the whole thing, and so you'd have these monstrously long chats with Claude or ChatGPT, prior
(14:46):
to Artifacts in Claude and prior to Canvas in ChatGPT, where you would see the same thing over and over and over again. It became super repetitive, so it was hard to find the change and hard to actually work with the tool quite as fluidly as you'd like. So to me, this is not a change in the underlying AI model at all. This is just more of the software engineering on top of
(15:06):
the model that makes it more effective and more easy to use, essentially.
So I was blown away by it for that reason, and I think there's so much upside. This is a really good illustration of something we've talked about a lot on this pod, which is that, relative to the state of the art in terms of what models can do, we have barely scratched the surface in
(15:28):
terms of the applications we can create using current AI. So here's a thought experiment for you. Let's just say, hypothetically, AI just froze, that there was no improvement beyond 4o and o1 caliber models. Things just stopped. There was no improvement at all for, say, the next five years
(15:48):
or the next 10 years in AI. Very unlikely for that to happen, obviously, but let's just say it did. In that scenario, let's say software engineering, though, was going full blast ahead, saying what can we do with these models. There would be innovation after innovation after innovation, where we'd be stacking up capabilities one on
(16:08):
top of the next, on top of the next, creating new applications for use. It's kind of like when the mobile phone first became a big thing, in the context of, I should say, smartphones like the iPhone in its earliest versions, even products before that, or when Android came out. It was a platform that needed applications, and some of the first applications were games
(16:29):
that took off. Then, after that, people found uses for productivity, but it was a platform in need of applications, and in some ways AI is a new paradigm for computing. You need to have applications on top of it that are actually useful for end users to take advantage of. So what I loved about this is that this is a really big step forward in making these tools just so much more helpful in
(16:50):
day-to-day work for typical end users. It's more intuitive, because you've gotten used to editing inline in Word and every other app, so of course you expect the AI to work that way. So to me, that was what was exciting. ChatGPT is dramatically bigger than Claude in terms of its adoption in the user community. I think it's like eight to one or something like that, in terms
(17:14):
of the number of users in ChatGPT versus Claude, and so I think this will have a big impact on that user community. I think Claude's a fantastic product. By the way, I know you use it probably as much or more than you use ChatGPT now, or have you switched over?
Mallory (17:26):
I pretty much exclusively use Claude at this point, but this might be enough to take me back to ChatGPT, with the inline edits. That's very helpful.
Amith (17:34):
I think that this Canvas feature is better executed than Claude's Artifacts, but Artifacts is great as well, and certainly it shows how hyper-competitive this space is, where two of the leading AI companies are innovating this quickly in the market. That's exciting as well, because we're going to see more and more benefit. Ultimately, competition benefits the end user of the
(17:56):
technology tremendously, so I think it's exciting for all those reasons. For me, I've been using it quite a bit since it came out. I've found that for anything that's complex... well, if I'm just having a quick chat with the AI about some topic I want to learn more about, or whatever, I don't really need Canvas.
But let's say I'm working on, you know, I don't know,
(18:16):
let's just think about a technical design for some new software application that we're cooking up, and in that process we're kind of iterating back and forth on, you know, what should this design look like? How should it work? What does the database look like? What does the business workflow look like? Those are all technical details that you might have in a
(18:37):
document or in some kind of format, and you want the AI to be able to work in context. Say, OK, no, that part of the database design isn't quite right, change it this way. And up until this type of UI, you'd have to go and basically rewrite the whole thing over and over, which was really hard to keep track of. It was still orders of magnitude better than doing it by hand, but this is just that next step function in terms of
(18:59):
productivity gain, in my mind.
Mallory (19:03):
So that's why I was super excited when I saw it, and, as we've mentioned, with Canvas these inline edits are really exciting. With Claude's Artifacts, you can in the chat say, oh, this bullet is not exactly what I want, can you change it ever so slightly? But it's kind of time consuming. And now, seeing what this new option is, which is just highlight, add a quick note of the change, just for this bullet point,
(19:25):
that's exciting. Now, I still think I like the output that Claude creates for me personally, but this gives me a lot more flexibility with infusing my own thought into what I'm writing. So that's very appealing. Have you tried this out with code creation at all?
Amith (19:42):
Not with code yet. I think that'll be a natural next step for it. A lot of times when I'm doing coding I'm working within the Visual Studio Code, or VS Code, environment and working with Copilot in there. So there's a Copilot built into the code editor. Sometimes if I'm doing coding I'll go to ChatGPT, or sometimes other tools, and ask it to create new code based on some general
(20:03):
ideas, and then once you're more into the refinement process, you're in the code editor, and Copilot in there is really, really good at helping you do kind of the incremental stuff. So I do think it has applicability for coding. But at least in my workflow, and what I tend to see other people do that are doing similar work, you do your big chunks of new work in a ChatGPT-type environment, and then
(20:25):
you pull it over into a code editor and you're doing more incremental work there.
Interestingly, there's been a lot of talk recently about new tools for software developers, some that are fully automated, things that are capable of doing full end-to-end software development, and some that are in IDEs like Visual Studio. There's another one called Replit which is very popular.
(20:46):
That has really great Copilot-style assistance. There's a new one called Cursor, which is actually a fork of VS Code that has enhanced AI capabilities built into the environment. And I think you're going to see more and more of that with applications: the app that you use is going to have more and more kind of AI-native workflow built into it.
We've been talking about that for a long time, how, if you're
(21:09):
in Canva, it makes sense to use Canva for what it's good at and have AI tools built into that particular product, and they've done a lot of that. That's true for the Adobe suite; that's true for, obviously, Microsoft with Copilot. You're going to see these tools essentially become part of the workflow. I do think the leading edge tools like Claude and ChatGPT do
(21:31):
have a place, and they're probably going to have more power than some of the embedded tools that you'll find. But it's going to be interesting to see, like, will you use ChatGPT at all in three years, or will these capabilities just be woven into other productivity tools? You know, has ChatGPT become the super app that can do spreadsheets and presentations and graphic design and Word
(21:52):
documents, or is it, you know, more of a companion tool? It's going to be really interesting to see how that shifts over time. For associations, one thing that you should be thinking about is how does this affect member experience? So, if you think about the way you engage with your audience, how are they engaging with your content?
How are they using your content?
(22:13):
A lot of times, associations put out a bunch of different products. They'll publish journal articles, they'll have, in some cases, books, they'll have, obviously, meetings and webinars, all of these different, essentially, knowledge products of various kinds. What do people actually do with those resources when they join your association or just buy individual resources?
(22:36):
What's the point of what they're doing? Do we really even know that? As associations, have we done the user research to find out what happens after one of our customers or members actually accesses one of our information resources? How are they using it in their day-to-day job?
And the reason that's important to understand is, what can we do to make our resources more valuable?
(22:57):
Do we want our resources to be something that could essentially have a Canvas equivalent, where, you know, users can directly interact through your association to make changes, to make it work better for them? Right, using AI tools directly as part of their engagement experience with the association, to still be in your context, with all the value you provide, but be able to adjust things to
(23:19):
hyper-personalize that content and that engagement experience for their needs. And I think that a tool like Canvas should be a thought experiment for associations looking at their own resources, their own software offerings. And, by the way, everything's a software offering, whether it's a downloadable PDF or a blog you read or a webinar; they're effectively all in this modality.
(23:40):
How can we think about shifting our engagement experience from a classic ChatGPT style of engagement to a Canvas style, right? Those kinds of shifts reframe the expectations of the broader public, and that includes your members. So if you don't think about how to engage people in ways that are contemporary and aligned with what they're getting in
(24:02):
consumer-grade tools like this, you're probably going to end up in a bad situation at some point. So it's an opportunity to draw inspiration from it.
Mallory (24:12):
One AI knowledge assistant that we've talked about on the pod before is Betty, which is in our same family of companies. Amit, have you thought about adding some sort of a Canvas-like feature within a knowledge assistant like that, potentially?
Amith (24:26):
I think Betty having a Canvas-type feature would make a ton of sense. I think that's true also for Skip. Skip is our AI data scientist that generates reports, and Skip already kind of works in that modality a bit. But this idea of a canvas, or essentially a workspace, I think makes sense for those kinds of applications, pretty much most everything, if you think about it. Because we've
(24:48):
had this... I think the chat modality of conversational interaction is awesome, but ultimately, whether it's an artifact or a canvas or a document, whatever you want to call it, you're usually creating something like that, right? You're creating some kind of durable result of the conversation. And so what we've had so far is just this long-running chat, and you've had
(25:10):
to copy and paste out of the chat, drop it into a Word document or a Google Doc and then do the final work there, and they're simply streamlining it. What the ChatGPT folks, their product management team, are probably saying is, what's the use case? How are people actually using this? How can we further streamline and make it more efficient, add more power to it? And so that's where I see this happening in general.
(25:33):
So one thing I would suggest, and I'm sure this is on the roadmaps of these products, is multi-user collaboration along with the AI. So if you say, okay, we can share... Like, if you had that chat thread that you just had in ChatGPT, you can share it with me and I can look at it, but I can't interact with it. So it'd be really cool if you had a Google Docs-style
(25:54):
multi-user collaboration where ChatGPT was in there, and you could tag ChatGPT to make a change, or we could tag another person to review it. That really would blend the workflow that a lot of people have gotten used to with Google Docs and Microsoft 365. And, of course, Copilot in the Microsoft realm is going to do exactly that, and is already doing exactly that in many
(26:17):
respects. So I know that's a little bit of a preview of our next topic, which we can get to in a minute, but I find these tools really fascinating, because they're going to fundamentally change the workflow.
Mallory (26:27):
Yep, we will be talking about exactly that, Copilot Pages, in the next topic. But one more question here, Amit. I wanted to spend a little bit of time talking about content transformation, because when I demoed and shared my screen, we saw there are these built-in shortcuts, essentially, to help you transform your content from long to short or adjust it based on reading level. I would say this is a marketer's dream in terms of
(26:49):
having one piece of content that you can reuse over and over again. But I kind of wanted to have a little discussion on the strategy behind that and the idea of quality over quantity. Using the Sidecar Sync podcast, for example, we've talked about how we use the transcript from this podcast to create blogs. Theoretically, we could create many blogs.
(27:10):
I'd say three to six. We could honestly probably create more than that if we broke down the topics into subtopics. We could create 10 social posts, a six-step email sequence, 10 social clips, a whole course based on every podcast episode. We could do all those things in theory, but perhaps we shouldn't, I don't know. So I just wanted to talk to you briefly about, kind of, now that
(27:31):
content transformation is really so easy, it's at our fingertips, how you would recommend approaching that.
Amith (27:39):
Well, you know, when I think about it, I look at it and say, what's our objective? Are we trying to reach a broader audience? Are we trying to engage our current audience more deeply? Maybe both; maybe there are other goals. So let's talk about each of those dynamics a little bit separately. Let's say we want to reach a broader audience. So, for example, let's say that our association is an
(28:02):
association of electrical engineers, and so in our domain we have a lot of very technical content about electrical engineering, and our core audience is this center of the Venn diagram, right, this electrical engineering group, and so our content is written by electrical engineers for electrical engineers.
(28:22):
But let's say that we need to collaborate with some other domains. Let's say there's some scientists out there we need to work with, and they're not electrical engineers, so their language and knowledge is a little bit different. Say they're physicists, and we have physicists and electrical engineers now working together. Or maybe there are mechanical engineers who we need to
(28:43):
collaborate with. So they're still obviously all very technical people, highly educated people. So it's not about a reading level; it's more about domain knowledge. So can we take a journal article written for electrical engineers and make it more comprehensible to, say, a mechanical engineer, who's at the same level of intelligence, education, all that kind of stuff, but comes from a
(29:03):
different domain? That can be interesting. So that's a form of content transformation, where we're trying to reach adjacent professions or adjacent verticals, that I think AI could tremendously help with.
Because if you have an AI that's an expert in your domain, and let's say there's another AI... In this example, let's say there was an AI for the Mechanical Engineers Association
(29:25):
and an AI for the Electrical Engineers Association, and those respective AIs were trained on the corpus of content from each association. So they were domain experts, and they were able to collaborate. The mechanical engineering agent, if you will, and the electrical engineering agent could collaborate to say, hey, let's make this electrical engineering article really relevant for the
(29:46):
mechanical engineers. The mechanical engineering AI would say, okay, this is what I think might make sense. And the electrical engineering AI would say, is this still correct in the context of what we're trying to communicate? And then the mechanical engineering AI would look at it in another pass and say, yeah, will this make sense to my typical audience? Right? So there's a multi-agent collaboration going on there to
(30:07):
do a cross-domain kind of knowledge transfer.
Another example might be, let's say that we're a dental association, and so our primary audience are dentists, but people that are super, super important to, you know, basically oral health generally are dental hygienists, right? Those folks are actually doing a lot more, spending a lot more time with a patient in the dental office than the dentist.
(30:30):
So let's say we have some content from the dental association that is dentist-centric, but we want to make it available to the hygienists, who have their own association and their own content. Same kind of idea, right? In that case, maybe it's a different tier of the profession, right? Or medical assistants relative to physicians, and so on. So I think there's a ton of opportunity here.
(30:51):
This could open up content for more use. It could improve the quality of care in the healthcare world, improve the quality of engineering in the engineering world. So it's really exciting.
I focused there initially in describing this because the most obvious example of content transformation is one language to another. So saying English to Spanish, to French, to German, to Dutch,
(31:13):
to whatever, and that's, of course, a wonderful capability AI gives us there. A domain-specific language model is also important, because you want your mechanical engineering text to make sense in French, and a lot of generic LLMs, whether it's open source or ChatGPT, may or may not get that right. But if you have a French
(31:33):
mechanical engineering LLM and an English mechanical engineering LLM, or preferably one trained on both languages, you're way more likely to get a great transformation outcome there.
So those are just some use cases that I think are exciting. And to your earlier point about transformation in terms of, like, taking bits and pieces of a piece of content and repurposing them, I totally agree.
(31:54):
That's a really exciting area, and you have to think about your strategy and say, okay, why would we break it up into so many chunks? Is that to try to personalize the way we reach out to people and say, hey, Mallory, we really know that you're super interested in, let's say, email strategy, and so we bring you more content that's related to that particular topic?
(32:14):
And this guy Amith, he doesn't like email, he's more interested in social media. Give him more of that, right? So, like a classical personalization strategy. That's certainly one way to leverage content transformation, is to bring out those pieces. Anytime we can save people time, I think that's interesting, right? Because everyone's busy, and so, like, the shorter we can make something, and then let them choose to go deeper into it, the better.
(32:37):
So summarization, of course, is another, now classical example of what language models are really good at. So hopefully that helps a little bit.
Mallory (32:45):
For sure. So, before going out there and transforming all of your content into various other formats and modalities, ask yourself why. It sounds like that's the essential question. Why are you doing it? Are you trying to broaden your audience? Are you trying to personalize the way that you reach your members, and then go from there?
Amith (33:01):
Yeah, and another aspect of transformation it's worth noting, just to link this back to a recent episode that we did on unstructured data, is transformation from unstructured to structured insights. We talked a lot about that on that pod episode, which was a very popular episode, and I think the utility in that is quite high, where you can say, okay, for my own content or for other people's content, can I answer
(33:23):
certain structured questions about that content on a consistent basis, using techniques like we described in that other episode, which we'll link to in the show notes, where we talked about how language models can actually go through, read the content or watch the content, and then answer a set of structured questions that might be very instructive in terms of a lot of
(33:46):
different applications you do downstream from there. So the examples we talked about in that other pod, for example, were take a corpus of research papers and answer a set of questions, essentially generating metadata from those papers. That's a form of transformation worth noting as well. And you can do the inverse as well, where you can say, we want to generate content based on these topics.
(34:08):
Use our existing corpus of content and generate a new article based on these five or ten topics.
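The unstructured-to-structured idea can be sketched as a small pipeline: ask the same fixed set of questions about every document and collect the answers as consistent metadata. In the sketch below, `run_model` is a hypothetical stand-in for whatever LLM call your stack uses, and the question schema is purely illustrative.

```python
import json

# Sketch: turn unstructured documents into structured metadata by asking a
# language model a fixed question set and requiring JSON answers. run_model
# is a stand-in for a real LLM API call; the schema is illustrative.

QUESTIONS = {
    "topic": "What is the primary topic, in five words or fewer?",
    "audience": "Which profession is the intended audience?",
    "level": "Is the content introductory, intermediate, or advanced?",
}

def extract_metadata(document_text, run_model):
    prompt = (
        "Answer these questions about the document as JSON with keys "
        + ", ".join(QUESTIONS) + ":\n"
        + "\n".join(f"- {k}: {q}" for k, q in QUESTIONS.items())
        + "\n\nDocument:\n" + document_text
    )
    return json.loads(run_model(prompt))

# Stub model so the pipeline runs without an API key.
stub = lambda prompt: json.dumps(
    {"topic": "motor grounding", "audience": "engineers", "level": "advanced"})
meta = extract_metadata("(journal article text)", stub)
print(meta["topic"])  # → motor grounding
```

Because every document gets the same keys back, the results can feed downstream applications like search facets or personalization directly.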
Mallory (34:15):
Moving on to topic two, Microsoft Copilot's Wave 2, which introduces several significant updates and new features to enhance AI-powered productivity across Microsoft's suite of applications. One of the new rollouts is exactly what Amith just explained, basically, and that is Copilot Pages, which is
(34:40):
a new collaborative canvas designed for multiplayer AI interactions. It allows users to pull insights from work data into an editable document, to share and collaborate on AI-generated content with your colleagues, and to iterate with Copilot like a partner, adding more content from your data, your files and the web. So it's essentially ChatGPT Canvas, but built into your
(35:00):
Microsoft suite.
With Excel, we're seeing more support for formulas, data visualization and conditional formatting, as well as Copilot in Excel with Python, which enables advanced analysis using natural language. In PowerPoint, we're seeing a new narrative builder feature for creating first drafts of presentations from prompts.
(35:21):
I actually went into PowerPoint before this and realized we have this available to us now, so I'm going to be testing that out. And within PowerPoint, you also have a brand manager to ensure Copilot works within your brand templates. In Outlook, we're going to see a feature called Prioritize my Inbox to help you manage emails more efficiently. It can do things like recognize who your manager is and who
(35:44):
your key contacts are, and it then ranks your highest priority, urgent emails based on that. Within Teams, we're seeing improved meeting summaries incorporating both audio transcripts and the chat content. So normally in meetings, I feel like within the chat interface, we'll have things like questions pop up or little notes. People might not want to unmute, so they'll add it into the chat.
(36:05):
So now with these summaries, it's actually going to reference all of the chat information as well as the transcript from the call, which is really exciting. In OneDrive, we're going to see better ability to find files, generate summaries of your files and compare documents without opening them. So the example that they gave in the video was when you have two files that have very similar names, and I am guilty of this,
(36:28):
you can actually just ask Copilot to compare them, to highlight the differences in the two documents, without opening them. That sounds incredible to me.
And then, perhaps maybe the most exciting, is Copilot Agents. So Microsoft's introducing agents, which are AI assistants that can perform specific tasks with varying levels of autonomy. These range from simple prompt-and-response agents to
(36:50):
autonomous agents that can actually take action. They can be built using the new agent builder, powered by Copilot Studio, and they're accessible in Microsoft 365 applications using the @ mention. They can also be in your Teams chats and can take action if prompted, which I think is exciting. Copilot now uses OpenAI's GPT-4o large language model.
(37:13):
Responses are more than two times faster on average, and Microsoft says response satisfaction has improved by nearly three times. So, Amith, I watched this video with the initial rollout, but I watched it again yesterday to prep for this podcast, and it was really exciting. But it makes me think about the initial rollout of Microsoft
(37:34):
Copilot, when we watched the video and we said this will change everything. And admittedly, you and I, I would say, are not the biggest Copilot users. So I'm curious to hear your take on Wave 2.
Amith (37:47):
I'm excited about all of the things you just mentioned. The Outlook Prioritize my Inbox piece, I think, is amazing. I had a former business partner of mine in a different company long ago, this is years ago, who had his own prioritization strategy, which was pre-AI, where he basically didn't respond to anybody's emails at all until they emailed him again saying, hey, what's up with
(38:09):
this? He's like, oh, if it's important, they'll email me twice. Wow. That may not be the strategy that Outlook's AI uses, I hope not, but it seemed to work for him, at least for a while, until people got really upset. But anyway, I think everyone suffers from email deluge, so having help with that will be great.
(38:32):
The Copilot Pages concept you talked about is exactly what I was describing earlier, where you can have a multi-user collaborative environment, and then, with the agent capability, we've talked a lot about agents on this pod and the rest of our content, being able to take action based upon what's going on. So imagine a collaborative space where you're working on a new blog post, and you have a couple of people that are collaborating on it, along with Copilot itself. You get the blog to where you want it, and then you say @, and
(38:55):
then you tag an agent, which is your publisher agent. You say, go ahead and publish this to my HubSpot CMS, and then that agent, we've trained it to also automatically break it up into five or ten social posts and do all those other things downstream. So that's a good example of how you can add your own custom agent using the agents toolkit that you mentioned and connect
(39:16):
it to other applications. That would be a non-trivial exercise to set up all that, but it's possible to do now within the workflow described here.
Something that's probably a little bit lower on a lot of people's radar, but super valuable, is this idea of code generation in Excel with Python. So Python, for those that aren't familiar, is a very
(39:40):
popular programming language that is particularly good for data analysis. A lot of people that are in the machine learning and AI space use Python as their primary coding language, to both train models and do other kinds of data analysis. And the ability for Copilot in Excel to interact with the user and then just generate programs that can do stuff with the data in Excel could potentially be a real
(40:01):
game changer for a lot of people. So imagine someone in your finance department that's working on a financial forecast, and they want to do something beyond their capability. They just ask Copilot. It might generate some formulas, it might use pivot tables, or it might actually go and write code in Python. So that's a very powerful thing to explore.
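To make that concrete, here is the kind of small program such a natural-language request might produce — a plain least-squares trend line projected one quarter ahead. This is a generic illustration, not Copilot's actual output; the revenue figures are made up.

```python
# Illustrative sketch: fit a least-squares trend line to quarterly revenue
# and project the next quarter, the sort of thing one might ask for with
# "project next quarter's revenue from these numbers." Figures are made up.

quarters = [1, 2, 3, 4]
revenue = [120.0, 132.0, 141.0, 155.0]  # in $k, illustrative

n = len(quarters)
mean_x = sum(quarters) / n
mean_y = sum(revenue) / n
# Ordinary least-squares slope and intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(quarters, revenue))
         / sum((x - mean_x) ** 2 for x in quarters))
intercept = mean_y - slope * mean_x

forecast_q5 = slope * 5 + intercept
print(f"Projected Q5 revenue: {forecast_q5:.1f}k")  # → Projected Q5 revenue: 165.5k
```

In the Excel scenario, Copilot would read the cell range instead of the hard-coded lists, but the generated analysis code would look much like this.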
(40:21):
PowerPoint's an area that I think has a lot of potential. I've actually, this is the part of Copilot I've probably used the most over the last, whatever it's been, six, nine months since we've had it, where I'll take a Word document after I've published something and I'll say, hey, PowerPoint, here's a Word doc. Generate a set of slides from this document. And I found that to be very effective, even in the current version of Copilot.
(40:43):
So those are some of the things that I'm probably most excited about. To me, I think the better model is going to be the main reason to start using it. So I found that Copilot's original release, with the, I think it was GPT-4 Turbo that they were using, was underwhelming, because we'd moved on in ChatGPT and in
(41:04):
Anthropic's Claude to much more powerful models, and the version they had in Copilot seemed to be underpowered. So it was kind of like, you get used to flying on an Airbus A380 at 570 miles per hour, and then you have to go back to flying around in a Cessna at 250 miles an hour or whatever. You're not going to want to do that.
(41:24):
That experience doesn't feel right. So if the model level within Copilot is at parity with what you get in ChatGPT, I think it would get a lot more natural usage.
The other part of adoption, though, is habit, right? So we're not in the habit of using Copilot in Word and in Excel and in PowerPoint. We are more in the habit of using an extra tool like an
(41:46):
Anthropic or a ChatGPT. So I think if we kind of push ourselves to try to use Copilot more, it'll help, because having the tool woven into the primary workflow tool that we're using for day-to-day work, I think, should be able to create efficiencies for us. So I'm excited about it. I think Microsoft, in recent years, has done a tremendous job
(42:08):
with their marketing. That original Copilot launch they did last year was amazing. The product itself, I think, holds the promise of living up to that video eventually, but obviously it's not there yet.
Mallory (42:20):
Oh, I have full faith that it will eventually be just as it was in that video. We need to get them to make a video for the Learning Hub, because that was truly impressive to watch. And I'm going to put it out there as an accountability system: as all of these new features roll out, I want to create a Microsoft Copilot course for the AI Learning Hub. We get tons of feedback and questions all the time about whether
(42:42):
we have that content, and so I'm making the commitment right now on the Sidecar Sync podcast that we will be doing that within the next few months. Amith, my question to you is, assuming this works exactly as predicted, what happens to all of our beloved tools like Beautiful.ai, Midjourney, even ChatGPT?
(43:03):
Do you think we're entering an era where we need to be prepared to potentially say goodbye to these tools?
Amith (43:13):
You know, this has happened. There's ebbs and flows in all of these historical waves of software innovation, where things start off as standalone products and apps, and then they really ultimately become features in some bigger platform. That's happened for years and years and years, and you know, it's good to have an explosion of capabilities as all these little apps, but then ultimately a lot of them, you know, kind of
(43:36):
get sucked into the capabilities of what a mainstream, broader tool is going to use. So, you know, something like Beautiful.ai, I don't know. You know, you have a lot of great tools that can do PowerPoint-style presentations, right, and a lot of people have tried to kill PowerPoint over a lot of years and had a hard time doing it, because PowerPoint has a user base.
(43:58):
It's kind of clunky in some ways, but it works. And if PowerPoint is AI-powered, why would you go to Beautiful.ai? I used to use Beautiful.ai, and then when I got Copilot in PowerPoint, I stopped using it. I mean, for me, I was a little bit less interested in some of the design features they had there. Like, I'm a very basic PowerPoint user in terms of the design side, but for me, PowerPoint, even with the first
(44:19):
version of Copilot, was plenty of horsepower to do what I wanted. So I don't know. I think it is natural, though, to see some of these tools go away.
I think the first casualties will be some of these AI note takers that are out there, like MeetGeek and products like that. Not necessarily that one specifically, but you really don't need them. In Zoom and in Microsoft Teams and in Slack, you have note taking, summarization, action items, all that stuff built into
(44:40):
those apps. Yet people keep using Read.ai, MeetGeek, all these other tools, which are also potential cybersecurity issues, because they're an extra tool that's kind of jumped into your most sensitive Zoom meetings. So I think some of those things are very obviously just features, and go away. Like, actually, a good example early
(45:02):
in the ChatGPT explosion was talk-to-my-PDF type apps. There were a whole bunch of apps that were like, hey, talk to a PDF, because ChatGPT doesn't allow you to upload a document. Well, gee, really? You think that's going to take too long for ChatGPT to add? It's ridiculous. So, of course, that had utility for five minutes and then was wiped out. So there will be some of that, and I think that's ultimately
(45:23):
good for consumers, because that type of competition is going to drive more value creation in the core tool set.
Now, as far as will ChatGPT itself go away? I don't think that that's likely. I think that they specifically have enough of a user base that they can add feature set to ChatGPT. That might make you ask the question: does Google Docs go
(45:45):
away? Does Microsoft 365 go away, and those capabilities become part of a suite of tools that ChatGPT makes available? I could see that happening too, you know, because think about what's happened with Artifacts in Claude and the Canvas that you just demonstrated in ChatGPT. You add just a handful of more features there for editing documents, and then you make the UI capable of showing you all
(46:06):
the documents you've created outside of the chats themselves. So you take all the different, like, canvases that you've created in chats, but flip that around and say, hey, there's a document browser in ChatGPT where you can see all the canvases or documents that you've created, and then be able to go back that way and to be able to share them, collaborate in there. It's not all that different to Copilot Pages or Word documents
(46:27):
or Google Docs. So, you know, ChatGPT has enough momentum where they stand a chance of being able to drive tooling like that, and they have one really significant benefit, which is that no one expects anything from them, so they can add the simplest, easiest features, whereas Microsoft Word and Google Docs, to a large extent now, are the incumbents, and they have a ridiculous number of features.
(46:48):
So weaving in AI, you know, is kind of hard, actually, when you have that much of what I'd call a kind of legacy feature set, where 5% of the users use most of those features. There's a million features in Word or whatever, and most people use 10 of them, and then the other features that are out there are used by a tiny fraction of people.
Mallory (47:10):
I'm going to veer a little bit, because we have your expertise as an entrepreneur, Amith. You said history has kind of repeated itself. We've seen this before, where companies built around this one feature go away, because that essentially becomes a feature in a bigger platform or application. So why is it, then, that we saw the appearance of meeting note takers
(47:30):
in the first place, if we're going to use that as the example, if we knew that in a few years they would be rendered useless?
Amith (47:37):
Well, entrepreneurs are always going to chase what the next opportunity is. Right? That's what entrepreneurs do. We're wired that way, and so we look at it and say, hey, where is a missing piece of value creation, or where is there an inefficiency, or where can we use a new technology or a new platform to create value and then monetize it, essentially? And so the idea of a meeting note taker is amazing.
(48:00):
AI meeting note takers that have been out there for a little while have built considerable businesses. So there's an opportunity to rapidly build a tool like that and then sell it to someone who will then weave it in, and it'll turn into a feature in their product. So that's oftentimes the playbook: to move quickly enough where you, as the innovator, as an entrepreneur, can rapidly build something.
(48:22):
And then the idea is that it's likely that one of the platform players will say, hey, let me just buy that thing and make it a feature. That's one possible playbook. The other is to basically have an ego big enough where you think that somehow you can buck that trend and become the next platform, and that occasionally does happen. It's a very low probability play. It's very unlikely for anyone to win doing that.
(48:44):
And then there's other strategies that are out there. A large part of my career has been focused on verticalization, where you take a capability and then you make it hyper-specific to a narrow go-to-market or a vertical, like the association market or the nonprofit market. Other companies that I've been involved in have been in other verticals, whether it's home healthcare or construction or a
(49:08):
number of other verticals where I've been involved as a founder or investor, and so I like these hyper-specific, narrow markets, because that's where the domain expertise, the relationships, the market knowledge is actually a really significant value add. I mean, it's why products like AMSs exist, as much as people don't like them. They provide a level of value creation above a generic CRM
(49:30):
that associations largely need, and so that's a reason why a lot of vertical products tend to have an interesting opportunity. But coming back to the horizontal plays, like a meeting note taker: they tend to be very short-term opportunities, and you either catch that opportunity early enough, get enough market share and exit, or you get crushed. And I think most people who do that kind of stuff know that, but they're
(49:52):
playing the odds.
Mallory (49:55):
That's very helpful. I want to ask one more question on Microsoft Copilot Wave 2, and it is around agents. So in the video that they rolled out, it seems really intuitive to build your own agent. They walk you through the process really quickly. Anyone, essentially, will be able to build an agent this month, I think, is when they're rolling this out in beta. And on
(50:16):
one hand, that's amazing, and on the other hand, for me personally, I'm thinking, well, what agent am I going to build? I could build any agent I want. Where do I start? And so I know you often talk about going to pain points. I can think of a few that I have, but I just wanted to give you an opportunity to speak about, now that potentially anyone can make an agent this month, how you would recommend
(50:38):
going about that.
Amith (50:40):
Well, I think seeking out pain points, which, another way to put that, is looking for inefficiencies, looking for things you do repetitively, is a great opportunity for agents. And so the email use case, I think, is a great one. Most associations get tons and tons of emails that are basically the same thing over and over again, and whether or not you have an FAQ on your website, and most people
(51:03):
do, 50% or even 80% of their email volume could be answered by looking at the FAQ or looking at the top 10 most commonly cited documents. So there's definitely an opportunity for a knowledge agent, like a Betty Bot or others like that, to be wired in to an email agent that can respond. In fact, the MemberJunction team has exactly that in the
(51:25):
works, and at Digital Now we're going to be announcing service agent capabilities where Betty will be able to wire into your email and automatically respond for you. And that's similar to what you can build on your own using something like Copilot agents. The main difference is whether you want to have an enterprise-scale type of approach, where you have your entire corpus of
(51:46):
content as grounded truth in terms of responses. That's important for a knowledge agent that's going to answer questions, let's say, within the domain of your association. So back to our engineering examples. If someone is just asking, you know, what's the location for our next annual meeting, basically any AI can answer that from a couple of documents. That's easy. But if they're asking a question about electrical
(52:07):
engineering or about mechanical engineering, you do not want to get that wrong. You don't want to use a consumer-grade tool for that kind of a question. So that's where it's more of an integration play, where you pull in, like, an enterprise-grade knowledge agent, like a Betty or something along those lines, and then wire it into your email infrastructure.
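The core of that FAQ-answering email agent can be sketched in a few lines. This is an illustrative toy, not any vendor's product: a real system would use embeddings and an LLM for matching, while simple keyword overlap keeps the sketch self-contained, and the FAQ entries and threshold are invented for the example.

```python
# Illustrative sketch of an FAQ-answering email agent: match an incoming
# question against a small FAQ by keyword overlap and answer from it,
# escalating to a human when nothing matches well. FAQ content is made up.

FAQ = {
    "Where is the next annual meeting?": "The 2025 annual meeting is in Denver.",
    "How do I renew my membership?": "Renew online under Member Portal > Billing.",
}

def answer_email(question, faq=FAQ, threshold=0.5):
    q_words = set(question.lower().split())
    best, best_score = None, 0.0
    for known_q, reply in faq.items():
        k_words = set(known_q.lower().split())
        score = len(q_words & k_words) / len(k_words)  # overlap ratio
        if score > best_score:
            best, best_score = reply, score
    if best_score >= threshold:
        return best
    return None  # no confident match: escalate to a human staffer

print(answer_email("where is the annual meeting held?"))
```

The escalation path matters as much as the matching: domain questions that fall below the confidence threshold go to a person rather than risking a wrong automated answer.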
I think email is a great place to start, because you can improve
(52:28):
the speed of your responses and give people better, more comprehensive responses faster, and you can lower your workload. So, you know, you win on both sides of it.
Mallory (52:40):
Better member service and lower internal cost. Awesome, that's very helpful.
I'm excited to see Wave 2 of Copilot, to make a course on it, and, hopefully, for it to live up to that initial video that they showed. It's going to be very exciting, yep. Thanks so much for joining today, everyone, our viewers on YouTube, our listeners on all major podcasting platforms. We will see you next week for episode 52.
Amith (53:05):
Thanks for tuning in to Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the
(53:27):
association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.