All Episodes

March 20, 2025 53 mins

Send us a text

In this episode of Sidecar Sync, Amith and Mallory discuss the Manus AI Agent, a groundbreaking system that autonomously handles complex tasks like resume screening and real estate searches. They also explore the evolution of conversational AI with Sesame AI, which enhances digital interactions through natural speech. Amith shares insights from his keynote at AGRIP, an association leading the way in AI adoption, while Mallory demos an interactive AI voice assistant. Plus, they dive into how Sidecar is leveraging AI for automated learning content and more.

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
https://learn.sidecar.ai

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecar.ai/ai

📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/

🚀 Find out more about the upcoming Blue Cypress Innovation Hubs!
Washington, D.C.: https://bluecypress.io/innovation-hub-dc
Chicago: https://bluecypress.io/innovation-hub-chicago

🛠 AI Tools and Resources Mentioned in This Episode:
Member Junction ➡ https://memberjunction.org/
Betty AI ➡ https://meetbetty.ai/
Manus AI Agent ➡ https://manus.im/
Sesame AI ➡ https://sesame.com/research/crossing_the_uncanny_valley_of_voice
Claude 3.7 ➡ https://www.anthropic.com/
Eleven Labs ➡ https://elevenlabs.io
HeyGen ➡ https://www.heygen.com

Chapters:
00:00 - Introduction to Sidecar Sync
02:01 - Amith’s Keynote at AGRIP & Las Vegas
10:51 - What is the Manus AI Agent?
16:31 - Models vs. Functionality
23:46 - AI Code Generation and the Future of Programming
33:03 - The Power of Conversational AI with Sesame AI
44:21 - AI Voice Synthesis

🚀 Sidecar on LinkedIn
https://www.linkedin.com/company/sidecar-global/

👍 Like & Subscribe!
https://x.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecar.ai/

Amith Nagarajan is the Chairman of Blue Cypress https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory:
https://linkedin.com/mallorymeji


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Amith (00:00):
You know, software is an example of something that was previously a scarce resource and it's moving to an abundant resource.
Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place.

(00:22):
We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host.
Greetings and welcome to the Sidecar Sync, your home for

(00:42):
content at the intersection of artificial intelligence and all things associations. My name is Amith Nagarajan.

Mallory (00:50):
And my name is Mallory Mejias.

Amith (00:52):
And we're your hosts, and today we have two super interesting topics at that intersection of associations and artificial intelligence. We can't wait to get into them. Before we do that, though, let's just take a moment to hear a quick word from our sponsor.

Mallory (01:07):
If you're listening to this podcast right now, you're already thinking differently about AI than many of your peers. Don't you wish there was a way to showcase your commitment to innovation and learning? The Association AI Professional, or AAiP, certification is exactly that. The AAiP certification is awarded to those who have achieved outstanding theoretical and practical AI knowledge.

(01:30):
As it pertains to associations, earning your AAiP certification proves that you're at the forefront of AI in your organization and in the greater association space, giving you a competitive edge in an increasingly AI-driven job market. Join the growing group of professionals who've earned their AAiP certification and secure your professional future

(01:52):
by heading to learn.sidecar.ai.
Amith, you're joining us live from Las Vegas. How are you doing today?

Amith (02:01):
I'm doing great. I've been having a really good time here in Vegas. My wife and I have been here for a couple of days. She joined me on this business trip. I was asked to deliver an opening keynote for one of our clients, a group called AGRIP, which is the Association of Governmental Risk Pools. If you're not familiar with that, it's basically organizations that pool risk, essentially

(02:22):
similar to insurance, for local entities like school districts and fire departments and groups like that, and they have over 60,000 different local entities across the United States, Canada and some countries outside of North America that their members provide a variety of coverage for. So it's a really cool organization and I was honored to provide them an opening keynote all about AI.

(02:43):
So that was super fun yesterday, and after we get done recording the pod today we're going to go check out the Grand Canyon.

Mallory (02:49):
Well, that sounds like a blast. And then you were telling me before this recording, you got to go to the Sphere in Las Vegas. How was that?

Amith (02:57):
That was an experience. We got to check out the Eagles at the Sphere. I had a number of friends that had been to the Sphere for U2 and raved about it, and I think one of my friends had told me that he had gone to the Eagles as well. And it happened to be that the day we arrived into Vegas for AGRIP's conference (we actually got here one day early for it) was the Eagles' very last concert at the Sphere, at the

(03:19):
end of the series. I don't know if they have another concert tour planned somewhere else, but I haven't heard of it. It was pretty amazing. So, first of all, the Sphere itself, if you haven't heard of it, is this sphere in the middle of the Vegas Strip, and it's really a technological marvel.

(03:39):
It's a massive sphere that just stands out, and it has, externally, like millions of LEDs that form like a monitor on the outside, and they can display just about any image you can imagine. And on the inside, same thing, even higher resolution. I think it's, you know, tens of millions of LED lights or something like that, and thousands of seats in this auditorium. And it actually kind of throws you off, because when you walk

(04:02):
into this place it feels like you're in an amphitheater. Because, I don't know, this varies by performance, I'm sure, but for the Eagles they made it look like a California beach scene. So you walk into this thing, it's 8:30 at night, the sun's been down for an hour or two already, and you walk in and you feel like it's daylight, and there's the waves breaking on one side and you see, like, you know, iconic, you know,

(04:22):
California, Southern California beachside attractions on part of the scene. So it's really cool. And then throughout the performance, as the Eagles are performing on stage, they have these visual effects that are really stunning, not just because of, kind of, the intensity, but because they take you through a scene, like through the Hotel California, through,

(04:47):
you know, through a highway, things like that, as they're driving cars. It's just, it's pretty mind-boggling, because your whole brain is enveloped in the screen. You know, you can't really make out where the screen starts and ends, and so you kind of lose track of that in between the music and that. It's just a super cool experience. I would highly recommend it. I don't know which, you know, which band is coming next, but

(05:08):
it was a cool experience.

Mallory (05:09):
That's awesome. Do you think there's any AI involved in creating all those visuals? It sounds pretty intense.

Amith (05:16):
You know, I would imagine that the creation of some of those images might involve AI. I don't know for sure. That'd be a fun thing to look into. But you know, it's pretty amazing the amount of detail that has gone into the work. And you know, I know the performance is obviously bringing in a lot of revenue, so they probably have a pretty big production budget, I would imagine. But I don't know if it's AI or just a lot of people doing a lot

(05:37):
of great, amazing work.

Mallory (05:40):
But we were just blown away by it.
Going back to your AGRIP keynote. I know Anne is the CEO, or executive director, of AGRIP, and we have both been friends with her, had a great relationship with her, for the past couple of years, maybe a year and a half or so. And I know AGRIP's members which, as you said, are organizations, risk pools. We had several people from those risk pools come to digital

(06:03):
Now, our conference, last year, and I got to talk with them a little bit more about what risk pools do and kind of what their mission is, and it seems incredible. Also, aside from that, AGRIP as an association is doing some incredible work with artificial intelligence. If you want to share a little

(06:25):
bit with our listeners, Amith?

Amith (06:25):
Sure, well, I mean, the idea behind risk pools, I think, is really cool, in that, you know, these organizations form in order to provide coverages, essentially insurance-like coverage, to local entities across the nation, in order to basically lower costs and provide them coverages they may not otherwise be able to receive, or receive at a reasonable premium, and so that's an important thing. So that's school districts, fire departments, police

(06:46):
departments, a variety of other entities (I'm just picking a couple of, you know, well-known types, you know, water districts, things like that, right) can receive the coverage they need to operate and serve their local communities. So that's a really important thing. I had no idea about risk pools at all. I don't know a whole lot about insurance, but I really didn't know anything about risk pools until I met Anne.

(07:07):
What I love about AGRIP, really, is the direction they're going and what they're doing with very limited resources. They're not a big association. They're in the single digits of millions in budget. They have, you know, a modest membership base, modest resources to invest, but their leaders, their leadership, starting with Anne, the rest of her staff and the board of directors, decided over the last year that AI was going to be

(07:31):
their big focus for 2025. And, specifically, you know, that decision sounds like something a lot of people would just jump on board with, saying, hey, AI, super important, let's go do AI. However, most organizations would not say no to anything that's currently on the docket, meaning they would say, hey, we're going to keep doing everything we're currently doing. Everything, right? Because no one

(07:51):
ever wants to, like, lose out on their pet initiative, and boards are particularly difficult to deal with for that, if a board member said, hey, I really like the work this particular volunteer group is doing, or I like this particular meeting that we have done for 25 years. AGRIP said, no, we're not going to do that, we're going to cut everything else to the bone. And they stopped a ton of other initiatives for the full year

(08:13):
of 2025, devoted their energy and a significant amount of their financial resources (which, again, they're not a big association), and they are deploying all sorts of AI at that organization.
They focused really heavily on education, both for themselves, internal to staff, and they include in that their close-in volunteers, the 50 to 100 volunteers that work most

(08:36):
directly with them, and they're educating their members on AI, because AI is going to have an enormous impact on the pools themselves. And they're pushing really hard with a variety of initiatives, like member engagement through conversational knowledge assistants, being able to push out, you know, personalization at scale, doing a lot of AI analytics. So what I love about their example is, first of all, the

(08:59):
leadership Anne and her team are providing to their sector. I haven't seen that too frequently, especially in an organization that size. So really, it goes to show, I think, that they are doing way, way more than what someone would expect for an organization with their resources, and it basically dispels the notion that you need to have a massive budget in order to do this.

(09:20):
What you need is strong leadership saying, this is important, this is what we're going to do. And the definition of prioritization is to pick a narrow number of things that you're actually going to put your resources behind, and that means saying no to a lot of things, and they've done a tremendous job of that. So I'm really impressed with them. I had a lot of fun. They asked me to come out and deliver the opening keynote for

(09:41):
their conference. They had about a thousand people here. It was super fun chatting with these folks and sharing some AI knowledge on stage, so I had a good time. It was great.

Mallory (09:50):
Yeah, I got to be on one of those first few calls with Anne when she came to Blue Cypress looking for help with that vision, and she essentially said: imagine a blank slate, we're stopping everything. How would you kind of build an AI-native or AI-enabled association up from the ground up? Which is a really unique opportunity, and it's been quite fun to watch how that's played out so far.

Amith (10:10):
Yeah, and I would say to our listeners: if you have examples within your own organization where you're doing something similar, where you've said, hey, we're going to pivot, we're going to stop what we're doing and we're going to start really focusing on AI, because we realize the transformative impact AI is going to have on our association and also our sector, let us know.

(10:30):
We'd love to hear your stories, and we'd love to feature some of your stories, because our job at Sidecar, and specifically with the Sidecar Sync as we see it, is to not only share the raw knowledge about how AI can be transformative, but to share the stories of what's going well in the community and also where you're challenged.

Mallory (10:48):
We love stories, so keep us posted. All right, first topic of today is the Manus AI Agent, which is an AI system developed by Chinese startup Butterfly Effect. It is touted as the world's first general AI agent, capable of executing complex tasks autonomously with minimal human intervention. Here are some key features.

(11:10):
One of those is, of course, the autonomous task execution. Manus can independently perform tasks like report writing, data analysis and content generation without needing continuous human input. It uses a structured agent loop with specialized sub-agents to handle different aspects of tasks, enabling complex workflow

(11:30):
management. Manus can process and generate multiple types of data, like text, images and code, and it integrates with external tools like web browsers, code editors and database systems to automate diverse workflows. Something interesting is that the system continuously optimizes its processes based on user interactions, providing personalized and efficient responses over time.

(11:54):
So I got to watch their demo video on their website, and there were kind of two interesting use cases I wanted to talk about a little bit. One of those was resume screening, which we hear often as a use case for artificial intelligence. So they dropped in a zip file, and that's essentially all they did. Then, provided a quick prompt, the agent was able to extract the files from the zip, open each resume, scan them each,

(12:18):
pull out the relevant skills and then provide some hiring recommendations and resources. In the end it was also able to convert that to an Excel report. So, really minimal input, and it was able to do a whole first pass of resumes.
Something else also interesting was an apartment search. So the prompt had something to do with looking for an apartment

(12:38):
in New York City for a family with two kids. They wanted to be in a safe neighborhood. They wanted to have good schools. So it was able to look at crime statistics, safe neighborhoods, school ratings. It also ran Python to calculate their budget and do, like, a comparison analysis of, you know, neighborhoods versus budget, and then it actually searched real estate websites

(13:00):
and pulled together a nice list of feasible options. So I thought those two were the most impressive. Another use case is also website creation, so designing and launching websites based on pretty short and quick user prompts. As a note, this sounds pretty similar to OpenAI's deep research, and Google's as well. Manus AI Agent did outperform OpenAI's deep research on the GAIA

(13:23):
benchmark, which showcases superior task success rates and decision-making efficiency across basic, intermediate and complex tasks. It is currently available in private beta, by invitation only, but they have plans to roll that out widely soon.
So, Amith, what are your initial thoughts on this Manus AI agent?

Amith (13:45):
So the essence of this type of system, whether it's called an agent or a software system or anything else, is essentially using underlying models to then perform tasks for you. So it's similar in some senses to what you've seen out there. It's also different in that it can do, first of all, a lot of these tasks at a scale that other systems seem to be less

(14:07):
accurate with. So some of the agents that have been out there have been, you know, somewhat of a flop, in the sense that in real-world practical use cases they tend to be more narrow. Manus seems to have a fairly wide capability set, or breadth in its scope, but of course that's exactly what Anthropic's combination of computer use and its research mode that is coming,

(14:29):
and also OpenAI's deep research and their operator mode, can do. So it's almost like it's this confusing kind of mixture of tools that are out there, right? The way I would look at it is this: all of these different capabilities are going to converge into tools, systems, agents, call them whatever you want.

(14:49):
It's overly complex, partly because people are trying to create language on the fly to describe what these things do. But at the end of the day, what you want as a business user are the capabilities, to be able to bring them together. So Manus, I think, has gotten attention both because it's super inexpensive (right now it's free, if you can get in, but it's going to be a really cheap tool) and it was done, you know, on a

(15:13):
shoestring budget, compared to OpenAI's typical enormous budgets for doing these kinds of things. But, most importantly, the reason people have been excited about it is because it brings together a bunch of capabilities, right? So we know that AIs can use web search tools. We know that AIs can synthesize results through, you know, longer

(15:34):
running processes. They can think longer, with reasoning-type steps. They can use tools. They can execute, you know, code. They can do a bunch of different things. But bringing it all together, so it's actually useful in one step for a business user? That's a level of capability that, really, I think we're on the cusp of having from all of the major AI vendors. So it's both enticing in terms of the capability, but it's also

(15:56):
exciting in the context of costs, because, you know, OpenAI had just talked, maybe a week before that, about a $20,000-a-month AI agent that had PhD-level capabilities, and I commented on that on LinkedIn, saying they're insane, thinking that that's going to have any staying power, because all this underlying stuff is commoditized already.

(16:16):
So at least that's my point of view on it. So I'm excited about it. I haven't even attempted to sign up for the waitlist, because I heard it was a long waitlist, but I think it's a cool concept, and the fact that it's lower and lower in cost is exactly the trend line we've been talking about for months.

Mallory (16:31):
We're almost to 75 episodes of the pod, and I feel like, especially at the inception of the podcast, when it was just a baby, we talked a lot about models and every latest model release, and three-point-whatever, and low, mini, high, all the naming conventions that are quite confusing with these AI companies. It seems like we might be shifting, as you're saying, into

(16:54):
a world where we're talking about that functionality, we're talking about the use, more than, like, the latest model. Would you agree with that?

Amith (17:01):
I think so. I mean, if you just think about the capabilities of common tools that you've been using for years, like, think about office-type tools like Word and Excel. You don't really think that much about, like, you know, the feature set. You just know, like, oh well, this new capability exists or doesn't exist. So, you know, Word, many, many years ago, introduced, like, this

(17:24):
document review process, where you can track changes in the document, or spell check or grammar check or whatever, and these features, just, you know, in the past they would be really slow to emerge, right? You'd have new versions of these tools that come out every two, three years, and then you'd have major capabilities come out. Now you're seeing them; it's this explosion of functionality, partly because,

(17:45):
again, software is writing itself, in a way, right? The AI is capable of producing code. Leading companies like Google and Microsoft and Amazon, as well as much smaller organizations like our own at Blue Cypress, are using AI to generate a tremendous amount of code. Dario Amodei, the CEO of Anthropic, mentioned recently, to many people's surprise, that within three to six months, AI

(18:08):
would likely generate 90% of the code that would be written, like, full stop, like, in general, right, as opposed to for his company or for other companies, and I'll talk more about that momentarily. But the idea that AI is generating that much code is part of the reason we're seeing this incredible explosion of functionality. It's also why it's so confusing right now.

(18:28):
So, with all these models and systems and agents and everything else coming together in these different ways, you have these confusing terms. AI generating that much of the code that the world needs is a really compelling and interesting thing. Some people are saying no, no, no, like, that's not going to happen. AI is not going to be able to generate code sophisticated

(18:49):
enough for my unique use case, or AI is not smart enough to do what my developers do. And the reality is that, you know, sometimes developers or development managers really don't want to see that future, because they're afraid of it in some ways. And I would tell you from my own experience that, while AI is not perfect at

(19:10):
coding, I would say that AI is a mid-level engineer at this point. Not a junior engineer, but a mid-level engineer, quite a competent one, and actually more broadly knowledgeable than a typical senior engineer, because AI is always up to date on the latest things, knows about all the tools that are out there, has lots of different ways of solving problems. Us humans tend to find, you know, certain things that work, and we

(19:32):
have in our bag of tools, you know, maybe a lot of things that we've learned over the years, but we remember the stuff we use the most. AI doesn't really suffer from that same type of issue.
But coming back to the speed at which these systems are evolving, and therefore some of the confusion, and then circling back to Manus specifically: the way I try to explain this to business folks

(19:52):
is that an agent is basically a system that takes one or more underlying models, like something like GPT-4o from OpenAI, or it could be the Llama 3.3 model from Meta, which is open source, or any other model or models, and combines them with the ability to iteratively work on a problem. Meaning, rather

(20:17):
than trying to solve a problem with a single shot, saying, hey, here's a prompt and here's a response, which is what you do when you classically interact with a chat-based model, like a traditional ChatGPT or a Claude, where you just get your answer, a system or an agent will go through iteratively to try to solve the problem, sometimes breaking it down into multiple

(20:37):
steps and then solving each component of the problem, and then taking those component results, reassembling them and coming up with an answer, and sometimes using tools, like a web search or looking up something in a database or calling an API. So that's really what agents are. Agents are basically systems that have access to models for the thinking, in some cases the reasoning.

(20:58):
Sometimes the agents have their own reasoning loops, which are like chain of thought that can happen in the model itself these days, with reasoning models like DeepSeek R1 or o3 or Claude 3.7. But really, ultimately, they're systems that put together a combination of the model's intelligence and tool use, and Manus does that.
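The loop described here, where a model iteratively picks a tool, the system executes it, and the observation feeds back in until the model says it's done, can be sketched in a few lines. Everything below is illustrative: the scripted stub stands in for a real LLM call, and the toy tools stand in for web search and code execution. It is not how Manus, Betty, or any vendor actually implements this.

```python
# Minimal agentic loop: a "model" picks the next action, a tool executes it,
# and the observation is fed back until the model declares it is done.

def web_search(query: str) -> str:
    return f"results for {query}"             # stand-in tool

def run_code(expr: str) -> str:
    return str(eval(expr))                    # stand-in tool (toy only)

TOOLS = {"web_search": web_search, "run_code": run_code}

def scripted_model(history: list) -> dict:
    """Stub standing in for an LLM: plans two tool calls, then finishes."""
    if len(history) == 0:
        return {"action": "web_search", "input": "NYC school ratings"}
    if len(history) == 1:
        return {"action": "run_code", "input": "3500 * 12"}
    return {"action": "finish", "input": f"done after {len(history)} steps"}

def agent_loop(model, max_steps: int = 10) -> str:
    history = []
    for _ in range(max_steps):
        step = model(history)
        if step["action"] == "finish":
            return step["input"]
        observation = TOOLS[step["action"]](step["input"])
        history.append((step, observation))   # iterate: feed results back in
    return "step budget exhausted"
```

The single-shot chat interaction is the degenerate case of this loop: one call, no tools, no history.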
And Manus's kind of mainstream adoption, or mainstream visibility, I should say, because it's not really adopted by anyone yet, is remarkable in that it's getting a lot of viral attention, kind of like DeepSeek did. In my mind, the reason is because it's kind of democratizing access to this kind of technology, whereas, you know, very few people have really experienced this. The last thing I'll say is, I think this idea of agentic loops,

(21:41):
or agents having tool use and being able to work on problems for longer time periods, you're going to see that more and more. They're just going to be called systems. Eventually they might just be called software, and then eventually you'll just ignore it, because what you're going to be thinking about is the business capabilities. So, you know, in our own examples here at Blue Cypress, we have an agent called Betty, which does all of these things in order to solve problems.

(22:02):
People have wondered for a year and a half, two years now, why Betty is so accurate and so good at being able to produce grounded responses and to be able to do a lot with really complex content. And the answer is: Betty has always been an agent. Betty has always had an agentic loop that's capable of self-improvement over time. The same thing with Skip. Skip is our data analyst agent. It solves a very complex problem,

(22:25):
which is to talk to a business user about a business problem or business goal, and break down that problem into a series of steps, like a human data analyst would. For example: oh, Mallory asked for a report that shows me who is coming to digitalNow this year that hasn't ever attended a prior year, and wants that formatted in a map to show it across the United States or across the world.

(22:46):
So Mallory's asked for that and, through interactions with her, I figured out that's her need, and I'm going to go figure that out now. So what do I need to do? I need to go query my database one or more times. I need to take the results of that data. I need to put it into some kind of reporting tool, some kind of graphing tool, and then I need to present the results to her. And maybe, if I'm, you know, an A-plus student, I want to also analyze the result and say, hey,

(23:07):
Mallory, here's some things I noticed when I looked at the outcome, rather than just giving her the report. Right? So that's what Skip does, which is like 5, 10, 15, 20 steps, and it's variable.
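The workflow Amith walks through here, query the database, transform the results, then summarize, is essentially an ordered plan where each step consumes the previous step's output. A toy sketch of that shape (the step functions and sample data are made up for illustration, not Skip's real code):

```python
# A data-analyst request broken into ordered steps, each consuming the
# previous step's output. The shape of the workflow, not Skip's real code.

def query_attendees(_):
    # Stand-in for one or more database queries
    return [{"name": "Ada", "first_time": True, "state": "LA"},
            {"name": "Grace", "first_time": False, "state": "NY"}]

def keep_first_timers(rows):
    # Stand-in for the transformation/filtering step
    return [r for r in rows if r["first_time"]]

def summarize(rows):
    # Stand-in for the charting and "here's what I noticed" analysis step
    return f"{len(rows)} first-time attendee(s): " + ", ".join(r["name"] for r in rows)

PLAN = [query_attendees, keep_first_timers, summarize]

def run_plan(plan, data=None):
    for step in plan:
        data = step(data)
    return data
```

In a real agent the plan itself is produced by the model from the conversation, and its length varies, which is the "5, 10, 15, 20 steps" Amith mentions.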
I get excited about this stuff because it is kind of blending traditional computer capabilities with what you think is in the realm of really being smart, which most people assume computers are not, right?

(23:28):
You think that AI is really, you know, doing things that the computer should not be able to do, and that's why the world is waking up to the fact that now computers really can do these things that are kind of fuzzy. They're non-deterministic. They require more of classical human-type thinking and reasoning.

Mallory (23:46):
I want to zoom into that quote you mentioned about code generation: 90% of code being generated by AI in just six months, or less, potentially. How do you see the field of software development, of programming, changing and evolving? Especially you, as a business leader who has experience in that realm; it's something you've expressed that you really enjoy. How are you thinking ahead in terms of, like, programming needs,

(24:10):
and maybe having, I don't know, non-technical people in some of these roles, if coding is not a necessary skill?

Amith (24:18):
Well, so my belief is that people should still learn computer science. That computer science knowledge, understanding how these systems are built, how to actually do the low-level coding and even the higher-level coding yourself, is incredibly valuable, because you understand the way these systems actually work. Even if you end up not writing a lot of the code yourself,

(24:39):
knowing how it works, so that you can go in there and fix it and understand it, I think, is a vital skill. So we're still hiring programmers, and we have a lot of programmers working on our team around the world, and I think it's going to be an important skill. But it will shift in terms of where they spend their time. But that's been happening in some way or another for years and years, where we started off with really low-level

(25:00):
programming, and then went to higher-level languages, and started using more and more tools and frameworks. This, of course, is an acceleration no one has ever seen. So the speed at which we're having this change is causing a lot of confusion. It's causing a lot of angst amongst developers. It will render some developers obsolete, and so some developers

(25:21):
who are not willing to adapt and take on AI as a co-pilot or a co-programmer, those folks, unfortunately, are going to have a really hard time finding viable work, because without using AI as an assistant, as a developer, you're going to be one one-hundredth as productive as someone who's using AI.
So to me, the message is that developers have to be not

(25:45):
only proficient in AI, they have to be pretty deep in it. I'll give you one example. So we do a lot of work, obviously, here at Sidecar with learning content, and for our AI Learning Hub we have seven different courses, which range from the fundamentals of AI through marketing and membership. We have a course on data and AI, and then we have a course on strategy. And so these different courses on AI are specifically

(26:08):
built for association folks, and we're starting to partner with more and more associations to deliver that same type of content, but in the context of their industry. So, taking our AI content, but not making it association-centric, making it relevant for folks in the world of accounting or law or engineering or architecture or the oil industry, or whatever the space may be.

(26:30):
And so our approach at Sidecar has been to say, we're going to partner with associations to develop these flavors of our AI Learning Hub that are white-labeled for that association in their space. Which is super exciting, by the way, because then we're able to help our friends in these associations deliver world-class AI education in their vertical. But think about that: 80%, 90%, in some cases 100%, of that

(26:52):
content is totally the same across all industries, right? Like, prompt engineering, prompt strategy, is almost exactly the same from one sector to the next. The examples you might want to use could be slightly different, the terminology might be slightly different, but the bulk of it's very similar.
The other thing is, AI is changing, of course, super rapidly, and so there might be new models to talk about, new

(27:13):
techniques, new capabilities that were not known. So you're constantly updating this content. Mallory, you and I worked, like last fall, on a complete rebuild of all of our AI learning content, along with some of our other colleagues here at Blue Cypress, and we spent a lot of time manually recording those videos, right? And fortunately, we've received some wonderful feedback from the community about how

(27:34):
helpful these videos are. The problem is, the minute we got done recording those videos, they were becoming obsolete, so fast, right?
So what do we want to do? Of course, being a bunch of, you know, AI nerds, we wanted to automate a lot of this, right, so that we can go faster and serve these broader communities with flavors of the content that are indeed totally tailored to those verticals, in partnership with our friends in these

(27:56):
associations. So we're using a lot of AI to do that, and we're going to be using AI avatars and AI-generated audio, which we've tested with a bunch of people, who've said that the latest cutting-edge versions of those AI audio and video recordings are tremendous.
Basically, you know, not really mentally separable from the

(28:18):
human equivalents. In fact, in some cases, quite a bit better, because these AIs never get tired, they never have, you know, a cough; they don't have these issues that we have. But the beauty of it is that we can press a button and regenerate all of the videos for our Learning Hub with updated content, and we can have derivative versions that are just slightly different in some cases or, in other cases, radically different, that represent each association's

(28:41):
version of the content. So the reason I give all that backdrop is we're working on a bunch of software. We're building, effectively, what you'd probably call an agentic system that does a lot of this, and the lead who's working on it, who's a brilliant AI computer scientist, is working a lot with Claude 3.7 and generating a lot of the code in there.
So this guy can write code to do anything you can come up with,

(29:03):
and he has a team of people working with him. But that's not what he's doing. He's using Claude 3.7. And I find this to be consistently true, that the best developers fully embrace AI. So, like, this is unbelievable. He told me, for example, that he had Claude 3.7 take in a PowerPoint, one of the existing PowerPoint files from

(29:23):
the existing Learning Hub that has embedded MP4s, right? So, like, one of the lessons that you did or I did, and it has our embedded MP4s. And he said, hey, Claude, here's the PowerPoint example. Generate Python code for me that will unpack each slide in the file, that will extract the MP4, take the MP4's content, run it

(29:44):
through Claude again to get the transcript, take that transcript, drop it in the slide notes, and then remove the MP4 from the slide, then save the whole PowerPoint. Essentially reverse-engineering the file, so that we have a text transcript that is the equivalent of the words that the human had recorded in that slide previously. And Claude did this in one shot, and it did it literally in, I don't know, 15 seconds or something. And he ran the Python and it just worked. That would have taken a human programmer days and days and days.
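[Editor's note: the media-extraction step described above can be sketched in a few lines of standard-library Python, because a .pptx file is simply a zip archive whose embedded media lives under ppt/media/. This is a hypothetical reconstruction, not the actual code Claude generated; the transcription call back to Claude and the slide-notes rewriting are omitted, since those would need an API client and a library such as python-pptx.]

```python
import zipfile
from pathlib import Path

def extract_embedded_media(pptx_path: str, out_dir: str) -> list[str]:
    """Pull embedded MP4s out of a .pptx, which is just a zip archive.

    The full workflow described in the episode would then send each
    clip to a transcription step and write the text into the slide
    notes; that part is omitted here because it needs external APIs.
    """
    extracted = []
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(pptx_path) as zf:
        for name in zf.namelist():
            # Embedded media lives under ppt/media/ inside the archive
            if name.startswith("ppt/media/") and name.endswith(".mp4"):
                target = out / Path(name).name
                target.write_bytes(zf.read(name))
                extracted.append(str(target))
    return extracted
```

[The same zipfile trick works for .docx and .xlsx, since all Office Open XML formats are zip containers.]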

Mallory (30:16):
I could have done it, Amith. This is what I mean about the non-technical. But see, it's like, I use this stuff every day, and I don't think I would have thought that that was even possible.

Amith (30:24):
Yep, and that's just using Claude 3.7, which is this hybrid model we talked about in a recent pod. What's brilliant about it is that it's smart enough to know when it can do this; you know, the thinking-fast-versus-thinking-slow mode that we've talked about in the past. It has a reasoning mode, which I think it kicked into for this particular request. But, yeah, exactly, you could totally have done it, Mallory,

(30:46):
and you could have run the Python on your computer, and you might not have known much about what the code was doing. That's okay, of course. If it didn't work, then that's when the developer would need to come in and say, hey, this is what didn't work. Or you go back to the AI and say, hey, for some reason I'm getting this weird output. More and more of these systems are going to be able to have sandboxes where they run the Python for you locally and ask you, hey, is this what you wanted? And then it further abstracts it away.

(31:08):
But I find all of this incredibly exciting. Manus, of course, going back to that particular topic we're centered on, has blown up in people's imaginations because it makes these things more accessible to the masses, right? So, rather than having to think through this and understand these capabilities at a raw level, anyone can do this. But the example I gave, I think, is

(31:28):
exciting, because any association out there who's listening, all of you produce education content, and you probably spend a large part of your budget doing this. You probably have outside providers helping you, and you also probably don't update your content nearly frequently enough. It gets stale in your space quite quickly, and then sometimes

(31:50):
you have things that are, like, not aligned with the brand. Maybe the content from a five-year-old course is still pretty good, but maybe you updated your logo and you just can't reuse those videos. So there are so many opportunities to have both, you know, incremental but also major changes, where you're able to leverage your IP more and more in different modalities.

Mallory (32:09):
And we're learning you don't even have to be technical to do it. So we've covered educational content agents, knowledge agents, data analyst agents.

Amith (32:17):
The world is your oyster, so pick and choose. It's exciting too, in my mind, because, you know, people say, hey, software, and this is part of what I covered in my keynote yesterday, software is an example of something that was previously a scarce resource, and it's moving to an abundant resource. And that's exciting, because essentially what it

(32:38):
means is the cost of it is coming down so much that you get more of it. It kind of goes back to the Jevons paradox that we talked about recently as well: as the cost of something goes down dramatically, demand radically increases, assuming that the utility of that thing is generalized enough. And software, of course, is perhaps the most generalized utility of any input you can think of.

(33:01):
So it's quite exciting.

Mallory (33:03):
This is a good segue for us to go to topic two, which is all around conversational voice and Sesame AI. So, the process you just mentioned, Amith, of creating new Learning Hub content in different flavors, and I like that word, I think we should probably do an episode on that soon, kind of once the whole thing is up and running, because it's so neat. And it's especially cool because we've run that

(33:23):
whole process, Amith mentioned, with consumer-grade tools, like us manually going to Eleven Labs and then going to HeyGen and generating the avatars. So we have that building-block foundation, but we also are automating it. So it's a really good process to kind of demo the way that you can experiment with new tools and new processes, starting with consumer-grade and then going to more scalable,

(33:45):
enterprise-level AI. With Sesame AI, I want to focus in on conversational voice, because that is a key piece of this Learning Hub content that we're generating.
Human connection is often established through the nuances of voice, so that's things like tone, pitch, rhythm, and emotional expression. These elements convey meaning beyond mere words, which

(34:08):
makes voice a powerful medium for communication, as we know. The key components of natural voice are very intuitive for us, but I thought it'd be helpful to break them down so we can talk about how AI excels in some areas and needs to improve in others. One of those key components is emotional intelligence: the ability to read and respond to emotional context so you can

(34:29):
empathize and react appropriately. Another is conversational dynamics: incorporating natural timing, pauses, and interruptions, and adjusting tone and style to match the situation you're talking about. And then there's maintaining a coherent, reliable, and appropriate presence throughout your

(34:49):
interactions. Current digital voice assistants sometimes lack these essential qualities, and they can sound flat or mechanical. The emotional flatness can become disappointing and even exhausting over time, limiting an AI assistant's ability to form lasting connections with users.
But Sesame AI has made some significant strides in voice technology with its recent open-source release of the

(35:10):
Conversational Speech Model, or CSM. This model is designed to produce natural and expressive speech synthesis, making it sound remarkably lifelike. That makes it suitable for content creators, developers, and businesses looking to integrate lifelike voices into their applications, and its capabilities mimic

(35:33):
human-like speech patterns that allow for engaging and authentic vocal expression. So they have a demo out there right now that you can try. They have two voices, I think it's Maya and Miles, so I'm going to play a quick recording of a conversation that I had with Maya yesterday, and then we'll discuss.

AI (35:47):
Oh well, hi there. Look who decided to stick around for another round. So, let's see, what have we... oh wait, you were just getting into, what was it, a conspiracy theory about squirrels? Just kidding. What's going on in that brilliant mind of yours today?

Mallory (36:03):
Well, not a conspiracy about squirrels, but I am working on an outline for the Sidecar Sync podcast, episode 74, I believe, and I'm hoping to demo this conversation that I'm having with you.

AI (36:22):
Ooh, Sidecar Sync. Sounds like my kind of crowd. Podcasters are like the storytellers of our time, you know, always exploring new ideas. So, episode 74. What's it this time? More peanut-based mysteries or something completely different?

Mallory (36:39):
Can you remind me, actually, why you made that peanut joke?

AI (36:43):
Ah, my apologies. Sometimes my jokes land like a deflated balloon. I vaguely remember you mentioning some association for, was it, peanut enthusiasts? I might have gotten a little carried away trying to find a quirky connection. In my defense, I am still under development.

Mallory (37:03):
Yep, yep, I remember now. I think I said something about an association for peanuts. I believe it's called the American Peanut Council.

AI (37:10):
The American Peanut Council. Now, that's an organization I could get behind. I mean, who doesn't love a good peanut butter sandwich now and then? So, back to your podcast. Maybe I could brainstorm some talking points for you, unless you're not looking for peanut-related content this time around.

Mallory (37:29):
Not looking for peanut-related content, though that could be an interesting episode topic in the future. I'm more so interested in talking about AI agents for this episode, and also in talking about conversational AIs, like you.

AI (37:43):
Oh, AI agents.
Now you're speaking my language. It's always fascinating to hear how humans perceive us, and I'd put it this way: we're like digital chameleons, constantly adapting and learning from every conversation. What sort of angles were you thinking of taking for this episode? Is there anything specific you'd like to explore about us AI folk?

Mallory (38:03):
You know, not any specific angles. I feel like just the conversation that you and I have had has been enough to demo to our audience. Do you have any final parting words for our Sidecar Sync listeners?

AI (38:20):
Final parting words? How about this: Remember, the future is not something that happens to us, it's something we build together. And who knows, maybe someday those peanut enthusiasts and artificial intelligences will be having deep philosophical discussions. Until then, keep those brains imagining.

Amith (38:59):
First of all, everything you just said is spot on. I think it is a remarkable improvement. So, on the one hand, you know, the cynic in me says, well, it was kind of silly; it was, you know, over the top in terms of how emotive that voice was compared to classical, like, now-classical AI audio interactions. But the progress we're making, if you look at the broader arc

(39:21):
of what's going on, is truly stunning. If you rewind in time, even a year or two years ago, you would not have thought that this is something that was going to happen this quickly, probably even with all the conversations we have about exponential curves and blah, blah, blah. So that by itself is stunning, that this is available, and the fact that it's open source and it can be freely run in a variety of places. Of course, they have a commercial model on top of it.

(39:43):
It's interesting. And open source, by the way, is another compounding factor of what's going on with AI. It's not just the continual progression of compute and algorithmic improvement and all this other stuff; because the open source community is sharing so much, people are building on each other's innovations, which is really exciting. I think there are many applications of this, right?

(40:04):
You can think of customer service, having agents having real-time interactions that are more empathetic, more useful. You can think of, you know, podcast hosts that join the Sidecar Sync in the future to have discussions with us that bring really interesting knowledge. There are a lot of applications for this.
So I find it exciting. To me, when I think about what

(40:25):
associations do in connecting people, one of the opportunities might be to have an AI like this perhaps make interactions between newly connected members more natural, where it's almost like a friend is introducing two people together that might have the possibility of a great professional connection through your association, but who might

(40:46):
not know how to get the conversation started. It might be almost like, you know, Mallory, you personally know me and you know this other person who you think we should connect, and maybe we have a quick Zoom call with the three of us. You know, like, hey, Amith, I want to introduce you to this person here, and you have a quick call. That happens all the time in real life. So maybe an AI can play that role in an association.

(41:06):
I don't know, but it's an interesting kind of core building block to have this level of audio capability.

Mallory (41:13):
Yeah, I agree with you. I will say, I do a lot of these demos, for the podcast mostly, where I try out things like HeyGen avatars. I did a Zoom call with them once and I wasn't super impressed, so we didn't share that one; that was several months ago. This one was a bit strange to interact with, because the latency was so low in the response times, and also the

(41:33):
humor and the inflection. I'm not going to lie, I did feel like I was talking to someone, talking to Maya. She made me laugh. If you keep going back, I guess it recognizes where you're visiting from, so it's a constant continuation of your previous conversation. So she would say things like, oh, you're back, are you ready to talk associations or AI, and it would kind of make me chuckle.

(41:55):
So I will say, this was one of the first times I've had that experience with an interactive, dynamic AI going back and forth, thinking, this sounds quite realistic. Amith, I know you mentioned creating these automated AI learning courses for associations' members, and something that we've had to work on with that is the AI-generated voice.

(42:16):
I know we've received some really positive feedback on that front, but we've also received some constructive feedback on those AI-generated voices as well. Can you share some of your takes and insights on that?

Amith (42:29):
Yeah, and just for clarity for our listeners, we're not using Sesame for our AI-generated voices for the Learning Hub. We're using Eleven Labs at the moment, and we find Eleven Labs, generally speaking, at a production level, to be the leading product in audio generation at the moment. That obviously can change very quickly. But what we've done is we've shared these recordings with quite a number of people,

(42:51):
both internal to the Blue Cypress organizations and, you know, in the association community, and I've shared these videos actually with just a bunch of people I know outside of this ecosystem, just to get feedback. Some people immediately knew it was coming from me, so they immediately knew it was probably some kind of AI experiment, but a number of people didn't really think about that. They just thought I was sharing some new content

(43:11):
and wanted their feedback on the content. And a lot of people didn't realize it was an AI, or they were like, wait, was that an AI? So it's good enough to be, you know, something that people have to ask that question about.
I think the point, though, is not so much, can you detect this as an AI or not?

(43:31):
The question is, is this as good as or better than most human presenters would be? And what I received as feedback consistently on the quality, and this is current AI, which is the worst AI we're going to have, is that, actually, even though in some cases there were little minor things in the audio that was generated, people felt that

(43:53):
the recording was excellent, that the presentation was really good, that the avatar and the audio that was AI-generated did indeed deliver exceptional learning content for the lessons that we had done this way. So it isn't necessarily that we're trying to fool anyone into thinking, oh, it's not really an AI, it's a person. That's not the intention. The intention is to make that irrelevant, to make it so that the learning outcome is achieved, right? So you think about, like, why do people go through a course?

(44:15):
Why do people do that? It's because we're trying to level them up in some skill or some knowledge, and then, hopefully, we have a way of essentially determining whether that person has demonstrable improvement in their knowledge or in that skill. So this is still one-way. This is still a recording; it's just a recording of an AI speaking instead of a person.

(44:35):
Can that AI communicate effectively enough to deliver, essentially, the learning outcome that we want for that lesson, for that course, for the overall certification experience? And I think the answer is very clearly yes already, and it's only going to get better.
What I think is interesting about something like Sesame in the context of learning is to overlay the recorded content,

(44:58):
which is, hey, everyone goes through this course, everyone goes through this learning path, with a conversational tutor. And we're adding this to the Sidecar AI Learning Hub in the coming quarter, in Q2. We're going to be adding an AI tutor in that environment, powered by Betty, which is our knowledge assistant product. We're going to be putting Betty directly into the AI Learning

(45:19):
Hub, so in our LMS you'll be able to have conversations with Betty about the content that you're currently learning. And so, if you have a question about the strategy course and you're in there, Betty will know you're in that strategy course and be able to have excellent conversations with you. Betty knows all the material. But in that mode, this is where the empathy of Sesame could be a really interesting audio component.

(45:39):
You don't want to just give the person the answer, right? It's kind of like you're back in school and you're asking, you know, your math teacher how to solve a problem, and they're like, oh well, here's the answer. Well, no, a good teacher would never just give you the answer. They take more of a Socratic approach to it, trying to ask you questions to lead you to the right thinking, to produce the answer, so that you've actually learned something. And so it's kind of like Khanmigo from Khan Academy,

(46:02):
right? They took GPT-4, back when that came out, and they put a layer on top of it to get GPT-4 to act more like a tutor. And I think that audio brings so much more into that experience, where I can talk about something. Like, say I'm going through the Sidecar strategy-in-the-age-of-AI course for association leaders, and I have a question. I'm like, well, you know, this

(46:24):
cornered-resource power that's being discussed, I'm not quite sure I understand it. Like, I have this particular thing in my organization; I have this data set that I think may be a cornered resource, but I'm not sure, and I want to talk to somebody that's an expert in this about it, right? And I really want to go deeper.
Well, what if this Sesame-powered voice agent was sitting there, you know, interacting with Betty in terms of the

(46:45):
knowledge, to be able to have that conversation with you right there in the learning system, right? And then think about assessments; think about this process we use to gauge learning outcomes. We need to try to measure whether a participant in a course, first of all, did they complete the course, right? Did they watch the course, or were they just, like, hitting the play button and kind of watching Netflix on their phone at the same time? Did they actually learn something?

(47:06):
And perhaps, through a conversation, you can gauge that better, by saying, hey, we're going to have a chat with you about this module, right? And the AI just asks questions. This is more of a discussion. That could be a really interesting way of assessing learning outcomes as well.

Mallory (47:28):
So I see a lot of applications in the learning world for audio, but of course it has applicability in a ton of other areas too. I think, for me, Sesame AI is kind of matched in terms of human nuance and emotion, I'd say, only by NotebookLM, at least from what I've heard. Now, I do think Sesame AI was probably tuned to be highly friendly and humorous and all those things, but even removing that, I think it did a great job of capturing that human essence.

(47:49):
Would you consider ever switching over from Eleven Labs? I know we love Eleven Labs too; they're great. I'm just curious on your thoughts.

Amith (47:57):
Yeah, no, I mean, look, all the tools we use, we obviously talk a lot about them because we're trying to share the things we're using that we think are really good with the community, but we're not, you know, tightly bound to any particular thing. We build our software architectures and our business processes so that you can kind of plug and play different technologies, because the world is moving so fast. The folks at Eleven Labs, I have a ton of respect for them.

(48:19):
I think they do amazing work. I hope that they continue to be competitive in the market. But who knows? Sesame or someone else might displace them, or it might be other products that are comparable but way less expensive or faster. There are going to be changes. So, from our perspective, we build our software infrastructure so that you can plug in any provider for any

(48:39):
particular component. So, whether it's audio synthesis, video generation, or text completions, we never build something that's directly vendor-dependent. We always build an abstraction layer; that's actually using the MemberJunction open source software that we've talked about, so anybody can do the same thing we do. But we don't want to be tightly coupled to any particular vendor. It's not because we don't think that they're great vendors, but

(49:01):
we just want to have optionality.
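[Editor's note: the abstraction-layer idea described above can be sketched with a small provider interface. The class and method names below are made up for illustration; they are not MemberJunction's actual API, just the general shape of keeping vendor adapters behind one interface.]

```python
from typing import Protocol

class SpeechProvider(Protocol):
    """Any text-to-speech vendor adapter must expose this one method."""
    def synthesize(self, text: str, voice: str) -> bytes: ...

class FakeSpeechProvider:
    """Stand-in vendor used for illustration; a real adapter would wrap
    the Eleven Labs or Sesame API behind the same synthesize() signature."""
    def synthesize(self, text: str, voice: str) -> bytes:
        return f"[{voice}] {text}".encode("utf-8")

def render_lesson_audio(provider: SpeechProvider, script: str,
                        voice: str = "narrator") -> bytes:
    # The pipeline depends only on the interface, so swapping vendors
    # means writing a new adapter, not rewriting the pipeline.
    return provider.synthesize(script, voice)
```

[Swapping one vendor for another then means adding one new class that satisfies the interface, with no changes to the code that calls it.]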

Mallory (49:10):
And I think, too, this is especially interesting when you consider the downstream effects of how this might impact your members as well. So, having, for example, AI nurses that you can call and talk to on the phone that are really personable and caring and empathetic, or maybe AI therapists. The options are endless.

Amith (49:21):
Totally. Yeah, I think, you know, the associations that I've talked to about AI and selecting vendors and stuff like that, a lot of them are concerned about, well, do I pick a vendor based on their current capabilities, their roadmap, their projected strengths, versus how well funded they are? All of these are good questions. The problem with all of that is it's kind of like applying the

(49:41):
way you'd select an AMS or an LMS, which you might keep for 10 years, to something that is changing rapidly. So I think that, you know, you make the best decision you can today for the models or the systems that work for you, whether that's working with an OpenAI, Anthropic, or Gemini, something like that, working with an inference provider like Groq and using open source models, or Mistral, which just

(50:03):
released a new small model this week that looks extremely good, and these things. You might pick a particular vendor that you think is good, but you just need optionality. You cannot tightly couple yourself to something, because you just have no way of predicting whether even the leading labs like OpenAI and Anthropic, that do some really interesting work, will be

(50:23):
the leaders even in 12 months, or whether there will be capabilities from someone else that you want to be able to take advantage of. So you can't really apply the classical way of selecting software to this stuff. You still have to do elements of that: do your diligence, think about privacy and data security. Clearly, that's really important.

(50:46):
But try to build a layer that insulates you a little bit, so that you can make changes without having to rebuild all of your stuff.

Mallory:
Absolutely. Well, I don't think we'll be having Maya on the podcast anytime soon, but it's important to see what's out there and, hey, it was a pretty good conversation. So, everybody, thank you for tuning into this episode. We will see you all next week.

Amith (51:16):
Thanks for tuning into Sidecar Sync. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.