All Episodes

April 24, 2025 57 mins


In this special episode of Sidecar Sync, we dive into the future of AI infrastructure with Ian Andrews, Chief Revenue Officer at Groq (that’s Groq with a Q!). Ian shares the story behind Groq’s rise, how their LPU chip challenges Nvidia’s dominance, and why fast, low-cost, high-quality inference is about to unlock entirely new categories of AI-powered applications. We talk about the human side of prompting, the evolving skillset needed to work with large language models, and what agents and reasoning models mean for the future of knowledge work. Plus, Ian shares how Groq uses AI internally, including an incredible story about an AI-generated RFP audit that caught things humans missed. Tune in for practical insights, forward-looking trends, and plenty of laughs along the way.

🔎 Find Out More About Ian Andrews and Groq:
https://www.linkedin.com/in/ianhandrews/
https://www.groq.com

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
https://learn.sidecar.ai

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecar.ai/ai

📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/

✨ Power Your Newsletter with AI Personalization at rasa.io:
https://rasa.io/products/campaigns/

🛠 AI Tools and Resources Mentioned in This Episode:
Superhuman ➡ https://superhuman.com
Groq Cloud ➡  https://www.groq.com
ChatGPT ➡ https://chat.openai.com
Claude (Anthropic) ➡ https://www.anthropic.com
Perplexity ➡ https://www.perplexity.ai

Chapters:

00:00 - Introduction
02:10 - Meet Ian Andrews, CRO of Groq
05:48 - Ian’s Generative AI “Aha” Moment
10:30 - What Groq Is and What It Does
14:57 - The Launch and Growth of Groq Cloud
18:25 - What “High Quality Inference” Really Means
23:10 - Agents and Interns: Speed in Autonomous Work
29:26 - Use Cases: Finance, Legal, Customer Service
35:25 - Member Services Reimagined Through AI
39:06 - How Groq Uses AI Internally: Real Examples
53:23 - What’s Next for Groq in 2025 and Beyond

🚀 Sidecar on LinkedIn
https://www.linkedin.com/company/sidecar-global/

👍 Like & Subscribe!
https://x.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecar.ai/

Amith Nagarajan is the Chairman of Blue Cypress https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory:
https://linkedin.com/mallorymejias


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Speed's cool, but I can only read so fast.
Why would I ever need speed? Like, I don't need it to go faster than I can read.
Which, if you step back, you know, there were probably people who said that when we were transitioning from horses to cars.
So it will become a funny comment in the future.

Speaker 3 (00:19):
Welcome to Sidecar Sync, your weekly dose of innovation.
If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place.
We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future.
No fluff, just facts and informed discussions.

(00:42):
I'm Amith Nagarajan, Chairman of Blue Cypress, and I'm your host.

Speaker 2 (00:48):
Hello everyone, and welcome to today's episode of the Sidecar Sync Podcast.
My name is Mallory Mejias and I'm one of your co-hosts, along with Amith Nagarajan, and the Sidecar Sync Podcast is your go-to place for all things associations, innovation and artificial intelligence.
Today, we have a really exciting episode lined up for you with someone from Groq, and that is Groq with a Q, as we

(01:13):
always say on the podcast, not Grok with a K.
We don't just have someone on the podcast today, we actually have Ian Andrews, the chief revenue officer at Groq, so we have an exciting conversation lined up for you.
Before we get into a little bit more info on Ian's background, I want to share a quick word from our sponsor.

Speaker 4 (01:31):
Let's face it: generic emails do not work.
Emails with the same message to everyone result in low engagement and missed opportunities.
Imagine each member receiving an email tailored just for them.
Sounds impossible, right?
Well, rasa.io's newest AI-powered platform, rasa.io Campaigns, makes the impossible possible.
rasa.io Campaigns transforms outreach through AI

(01:52):
personalization, delivering tailored emails that people actually want to read.
This opens up the door to many powerful applications like event marketing, networking, recommendations and more.
Sign up at rasa.io/campaigns.
Once again, rasa.io/campaigns.
Give it a try.
Your members and your engagement rates will thank you.

Speaker 2 (02:11):
Ian Andrews is helping to deliver fast AI inference to the world at Groq as chief revenue officer.
You've got to imagine, to work at a place like Groq with a Q, you must be pretty impressive, and Ian is exactly that.
Prior to Groq, he served as CMO at Chainalysis, achieving 4x growth in customers and revenue in under four years.

(02:31):
Prior to Chainalysis, he was SVP of products and marketing at Pivotal, where he guided the company from launch to successful IPO and later a $2.7 billion acquisition by VMware.
Ian's earlier experience includes revenue roles at market-creating firms like Opsware and Aster Data.
Ian specializes in launching innovative products, scaling

(02:53):
early-stage companies and building high-performing teams.
His expertise spans AI infrastructure, blockchain analytics, cloud-native applications and enterprise software.
So what do we have lined up for you all today?
Well, of course, we're going to be talking about AI inference, what that is, why it matters and why Groq is dominating in that

(03:15):
area.
Now, Ian's going to provide a much more sophisticated answer as to what AI inference is, but, to give you the short and sweet, it's essentially running the AI model.
So why would we need AI models to generate output faster than we can even read?
Ian's got an answer for that, so stay tuned.
We also discuss how an AI inference company like Groq is

(03:37):
actually leveraging AI internally.
We have a really awesome conversation lined up for you all today, so please enjoy this interview with Ian Andrews.
Ian, thank you so much for joining us on the Sidecar Sync podcast.
I'm hoping, as we kick off this episode, you can share a little bit with our listeners about your background.
Who are you?
Who is Ian?

Speaker 1 (03:59):
Great question.
So I am the chief revenue officer at a company called Groq.
I've spent about 25 years in technology.
Actually, very early in my career, I worked with Amith at one of his companies.
So we go way, way, way back.
And you know, I think of myself as a technologist.

(04:19):
I've always tended to spend time in early emerging technology and, you know, I've built a career around bringing that technology to the world.

Speaker 2 (04:31):
Awesome, awesome.
Well, if you've worked with Amith, and I know Amith has pretty much exclusively worked with associations for his whole career, does that mean you have some familiarity with our association audience?

Speaker 1 (04:42):
I do, absolutely.
You know, for the first couple of years of my career (I live in Washington, DC), I was very focused on the association market.

Speaker 2 (04:50):
Awesome, awesome.
And you were a speaker at digitalNow 2022, which is Sidecar's annual conference.
Is that right?

Speaker 1 (04:58):
I did.
I came and talked about blockchain analytics.
At the time, I was chief marketing officer for a company called Chainalysis, and we built a product that allowed companies and government agencies to actually understand what was going on in the blockchain, and so I think I maybe scared the audience a little bit by going deep into the bowels of cryptocurrency and talking about North Korean

(05:21):
hackers and Russian ransomware gangs and narco traffickers.
Hopefully didn't put people off the technology too much.

Speaker 2 (05:31):
It was a great time.
I was attending digitalNow that year for the first time as an attendee.
I was working for the greater Blue Cypress family of companies, and before we started recording this episode, I remembered that for me, that event, digitalNow 2022, is kind of my aha moment for generative AI.
It's the first time that my mind was really blown, and it kind of triggered this whole career path after it.

(05:53):
I thought it would be a fun question to ask you if you have had a generative AI aha moment, a moment where your mind was blown, where you knew: I need to be working in this for the foreseeable future.

Speaker 1 (06:08):
When ChatGPT launched and I started playing with it was probably the first moment where I was like, wow, this is actually very real.
Because I've worked for a couple of companies over the years in the data and analytics space, and so I would describe myself as fairly familiar with machine learning and, you know, brought solutions to customers to help them do

(06:29):
like early web and clickstream analytics, so big data, you know, kind of at-scale information processing.
But honestly, like, that technology was very hard to use, very expensive to implement.
A lot of companies didn't get value from it.
The first time I started playing with ChatGPT, I turned

(06:50):
around and showed it to my kids, and my kids picked it up and immediately started using it, and my now 11-year-old, I think, went and started writing a story and then came back and read the whole thing to me, and he had kind of interactively gotten the AI to give him, you know, something about superheroes and dragons, and I was like, this is very real.
And if you, you know,

(07:14):
rewind back to that period two and a half years ago, the technology kind of looks like a toy in comparison to what we have today.
So I think I've had a series of moments over the last two and a half years where I'm like, wow, I can't believe that's possible.
Probably one that comes to mind recently: I use a product called Superhuman for email, and I've

(07:38):
been a longtime Superhuman user.
It sits on top of Gmail, and its original claim to fame was it just goes really fast.
So it's a very streamlined interface and, as someone who gets, you know, thousands of emails a week, like, just getting through that pile is pretty difficult.
And they introduced, you know, some typical AI features along the way where you can, you know, auto-draft a response.

(08:01):
But it was never, like, super compelling to me.
I tend to write very short, precise emails, and the AI wanted to be, you know, all this flourish and lots of extra words that I would never actually use.
I never found that that valuable.
But they just introduced a feature where it will look at the email and, if it's an open question or something that

(08:24):
requires a follow-up and I don't get a response, it will automatically put a prompt email in the top of my inbox.
So it'll draft the email, and the drafting is now, like, precision.
It's basically how I would write the message, and I was totally blown away, and I've been using this feature for the last two weeks, and I'm like, I'm going to get very soon to the

(08:47):
point where I never have to write email again.
Like, those reminders are just going to fire off automatically, because now it's, like, still human in the loop, but the quality is such that I don't really need to look at it that closely.
So I wake up every day and I get a list of these emails that Superhuman has suggested I send to follow up on topics that happened, you know, over the last couple of weeks that are

(09:08):
left unresolved.
It's very good.

Speaker 2 (09:11):
I think I need Superhuman.
I've not heard of it.
It sounds like a pretty cool tool.

Speaker 1 (09:15):
Yeah, I'm sadly not an investor, so I'm not even,
like you know, showing my ownbag here.
It really is a great product.

Speaker 3 (09:24):
I'm going to be downloading it shortly.
You get it on the phone?

Speaker 1 (09:28):
It's great.

Speaker 2 (09:30):
And I wanted to share with our listeners: you all are actually the first group of people to hear this, but Ian will be joining us at digitalNow 2025.

Speaker 1 (09:38):
I just signed up.
It's amazing, quite literally.

Speaker 2 (09:41):
I just gave him the dates.
He's coming, he'll be there.

Speaker 1 (09:46):
It's blocked on the calendar for sure.

Speaker 3 (09:48):
Awesome.
Well, you know, that is eight months from now, right?
So just a little bit under eight months from now, and that's, you know, more than a 1x, more than one doubling, of AI power in that timeframe.
So, as we have our discussion today, it'll be very interesting to reflect back on this conversation that's being recorded in late March relative to early November.
You know, lots will be happening in the world of AI between now

(10:10):
and then.

Speaker 1 (10:12):
There's no doubt.
You know, we could sit here and try and plan out the content for that talk today, and it would almost certainly be obsolete by the time we get to November.
Agreed, agreed.

Speaker 2 (10:27):
So, Ian, I want to talk about Groq, where you work.
Everyone, that is Groq with a Q, just so you know.
For someone who is not familiar with Groq, who's never heard of it, how would you describe what Groq does?

Speaker 1 (10:37):
So Groq builds a vertically integrated AI inference platform that gives you super high quality AI responses incredibly fast at very reasonable prices.
Now, a lot of words in there.
So inference engine: a fancy way of saying we run the models.

(10:58):
So we're not a foundation model company.
If you're familiar with OpenAI or Anthropic, they are foundation model companies.
They build models.
They also run them.
Groq doesn't do that.
We run other people's models.
We run lots of open source models.
So if you're familiar with the Meta family of models called

(11:19):
Llama, we run a lot of Llama.
But we also run open source models from Google and from OpenAI, like their Whisper models for speech-to-text conversion or transcription.
And Groq was founded by a guy named Jonathan Ross.
Jonathan is famous for building a chip called the Tensor

(11:41):
Processing Unit, which he designed at Google as a 20% project.
People probably aren't that familiar with the TPU unless they're in AI, but if you've ever used a Google service, from Search to Mail, or ever ridden in a Waymo, the self-driving car division of Google, all of that is powered by the TPU.

(12:02):
So the TPU is now an incredible part of the backbone of Google's infrastructure.
And Jonathan, after creating the TPU, came to the realization that the rest of the world was going to catch up to where Google was and, like a lot of visionary founders, he got the direction of the market correct but, quite honestly, missed on

(12:24):
the timing a little bit.
So he founded Groq back in 2016, assembled a world-class team to build our chip called the LPU, which is the foundational element that gives us our speed and our economic advantages over alternatives like NVIDIA.
But he was very early to market, because back in 2016, no one

(12:47):
was talking about artificial intelligence seriously.
You know, OpenAI themselves, I think, were only created around that year.
You know, the average enterprise was hoping to do, you know, business intelligence on top of a data warehouse, and that was kind of state of the art.
Machine learning was still restricted to really the

(13:08):
high end of the landscape, and so the company struggled commercially, actually until that moment we were talking about a little while ago: ChatGPT launched, and suddenly we all woke up to the realization that, wow, the world, how we work, how we live, is about to change in a really dramatic way.
And, you know, I think the default for most people would be, you

(13:30):
know, NVIDIA as the destination of choice to run AI workloads, but obviously they're, one, very expensive and, two, very hard to actually get access to, particularly if you're outside the US market.
GPUs are kind of generally unavailable, and so Groq started getting a lot of attention from that moment on as a potential

(13:51):
alternative, a contender, if you will, to NVIDIA.
The second big moment, though, happened a little over a year ago, when we launched Groq Cloud, and so this is the second part of what Groq is: we're not just a chip company.
We're actually a vertically integrated service.
So you can go to groq.com right now and interact with models

(14:14):
running on Groq.
You can experience the speed yourself, the performance, the quality, but we have a whole enterprise platform.
So for anyone who's building applications that use AI, Groq Cloud is a great destination for you.
We launched that a year ago.
We've had over a million people sign up for it, nearly 200,000

(14:35):
monthly active users to our paid developer tier, and we've seen tens of thousands of signups in the last two months.
So it really has been this kind of organic phenomenon of popularity and really something special in the market.

Speaker 3 (14:55):
You know, I can relate to that timeline, specifically the last, well, really the whole timeline you mentioned, because some of the stuff we've been doing in AI dates back to the early 2010s and the mid-2010s, and it was tough going.
I mean, we were selling a personalization engine in this market, the association market, starting in 2014, 2015.
And the technology was super rudimentary back then, but it was

(15:16):
very difficult to convince anyone that it was worth investing in.
AI was, well, you might as well have been selling sci-fi to someone.
So I can definitely relate to that.
And the more recent history you described is super interesting to hear from your perspective on the Groq team, because over here in our little world, we happened to have a hackathon down in Florida.

(15:36):
I think it was February of '24.
And I remember I was at this hackathon.
I had heard of LPUs and Groq just in passing at some point prior to that.
I had a general, vague idea of it, but then one of the developers at the hackathon said, hey, we've got to check this thing out, and demoed Groq Cloud, I think very early on, like you guys had just released it, and it was insane, because

(15:58):
you know, we're used to just kind of sitting there, like, oh, you want to send your query of any significance to OpenAI? Go use the bathroom, get a cup of coffee, come back, and hopefully it's there, and hopefully it doesn't say something went wrong.

Speaker 1 (16:31):
It really is, and thank you for being a customer.
I mean, I think you bet very early on the platform and were patient with us as we were scaling up, and that early support, I think, led to significant success for the company.
We were able to do a large equity raise in August last year, which we've invested back into scaling our global capacity.

(16:54):
By the end of Q2, we'll have 10x'd our fleet globally while driving really significant performance enhancements at our networking and software layer.
So combined, it's a massive scale-up of Groq Cloud, which in turn fuels the opportunity for more startups, more application

(17:15):
builders, to come and use the service, which we're very excited about.

Speaker 2 (17:21):
All right, I want to hop in here as the least
technical person on this callfor all of our listeners.
You mentioned a few terms, ian.
You mentioned inference, tpusand LPUs.
I don't think we defined LPUs,but just to clarify when we
become industry jargon.
But absolutely it's justrunning a model.

Speaker 1 (17:46):
So if you've ever gone to ChatGPT and typed in a question and gotten a response, in the backend the model ran your input prompt and provided you an output response.
That's inference.
Does that make sense?
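
Ian's definition (prompt in, generated tokens out) can be made concrete with a toy next-token loop. This is an illustrative sketch only: the hand-built bigram table below is invented for the example and is nothing like a real LLM or Groq's LPU pipeline, but the loop shape (predict next token, append, repeat) is what "running the model" means.

```python
# Toy illustration of "inference = running the model": a tiny hand-built
# bigram "language model" (NOT a real LLM) that, given a prompt, repeatedly
# predicts the next word until it hits a stop marker.
BIGRAMS = {
    "the": "model",
    "model": "ran",
    "ran": "your",
    "your": "prompt",
    "prompt": "<end>",
}

def run_inference(prompt: str, max_tokens: int = 10) -> str:
    """Generate a continuation one token at a time, the way an LLM does."""
    tokens = prompt.lower().split()
    for _ in range(max_tokens):
        nxt = BIGRAMS.get(tokens[-1], "<end>")
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(run_inference("the"))  # the model ran your prompt
```

A real inference platform does the same loop over billions of learned weights; the speed Ian talks about is how fast each pass through that loop runs.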

Speaker 2 (18:01):
And then a TPU?

Speaker 1 (18:02):
The TPU is just the chip that Google uses specifically to run AI and machine learning workloads.
It stands for Tensor Processing Unit, and then Groq's chip is called the LPU, or Language Processing Unit.
So yeah, lots of jargon in the technology circles, for sure.

Speaker 2 (18:23):
Yeah, but it's helpful, and I think that makes sense.
When you talked about inference with Groq, I think you used phrases like higher quality inference and faster inference.
I think speed makes sense, and I've seen Groq in action.
It's pretty mind-blowing how insanely fast these responses are generated.
What do you mean when you say high quality?

Speaker 1 (18:46):
Yeah, well, if you rewind back, you know, a couple years, the first time you saw ChatGPT, you could ask certain questions like, hey, write me an essay, you know, in the voice of a ninth grader, and you might get like a pretty reasonable essay.
But I think if you looked at it closely, you might be able to

(19:08):
piece together that it was probably produced not by a human but actually by an AI.
And if you now go and run that same question against any of the modern LLMs, you're going to get probably a much higher quality response in terms of the output of the model.

(19:29):
It might actually be hard to distinguish between human-produced and AI-produced.
So by that measure of quality, I think we've seen a great progression across AI models.
But specifically in the context of Groq versus other providers, there's a few tricks that people can use to go fast.

(19:54):
You can actually reduce the precision of a model, and you'll get more tokens quickly, so the text will return faster.
But the trade-off there is that you then get lower quality responses, and so that's often a trade

(20:22):
that is not desirable, right?
You can imagine, you want the highest quality possible at the fastest speed, and so the other corner of that triangle is cost.
So usually, to have, you know, fast and high quality, you have to pay a lot.
And this is where I think Groq is unique in the market.
As compared to, say, NVIDIA, you know, we're significantly

(20:43):
cheaper, much faster and equivalent or better quality.
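
The precision trade-off Ian describes can be sketched numerically: quantizing weights to a coarser grid of representable values is what makes low-precision inference cheaper and faster on real hardware, and the coarser the grid, the further the output drifts from the full-precision answer. The weights, inputs and level counts below are made-up toy numbers, and the single dot product stands in for a whole model.

```python
# Illustrative sketch of the speed/quality trade-off: fewer quantization
# levels means cheaper arithmetic but a less accurate "model" output.
def quantize(weights, levels):
    """Round each weight to the nearest of `levels` evenly spaced values in [-1, 1]."""
    step = 2.0 / (levels - 1)
    return [round((w + 1.0) / step) * step - 1.0 for w in weights]

def activation(weights, inputs):
    """A single dot-product 'layer' standing in for a whole model."""
    return sum(w * x for w, x in zip(weights, inputs))

weights = [0.137, -0.522, 0.891, -0.333]   # invented full-precision weights
inputs = [1.0, 2.0, 3.0, 4.0]

full = activation(weights, inputs)
coarse = activation(quantize(weights, levels=5), inputs)    # "low precision"
fine = activation(quantize(weights, levels=257), inputs)    # "high precision"

# The finer grid stays much closer to the full-precision output.
assert abs(fine - full) < abs(coarse - full)
```

Groq's claim, in these terms, is that its hardware gets the speed without having to drop down the precision ladder.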

Speaker 2 (20:49):
It sounds pretty good.
Cheaper, better precision, faster.
I know in terms of speed it definitely feels like a nice-to-have if we're just using, like, consumer-grade AI tools, but can you share some examples of what kinds of applications might become possible when response times drop from seconds to

(21:09):
milliseconds?

Speaker 1 (21:10):
Totally.
This is a great question, and actually, when we first launched Groq, that was the reaction we got from a lot of people: like, speed's cool, but I can only read so fast.
Why would I ever need speed? Like, I don't need it to go faster than I can read.
Which, if you step back, there were probably people who said that when we were transitioning from

(21:31):
horses to cars: why would I ever want to go further than the horse can take me in 30 minutes?
Or why would I ever want to go faster than a horse can gallop?
I can't imagine the possible scenarios where that applies.
So it will become a funny comment in the future.
But let's make some concrete suggestions.
If you've ever been on hold in a call center, you call your

(21:55):
healthcare company or the cable company, you get put in one of these queues where they tell you endlessly how important you are and we're sorry you're waiting, right?
Probably one of the most frustrating experiences ever.
Imagine that we can scale that call center up, not by hiring more people, but by providing an AI interaction that's as good

(22:18):
as or better than the human-to-human interaction.
So live voice, not text, not a chat app, but actually talking to an agent that sounds like a human and has all the same information retrieval and decision-making authority as a human does.
That capability exists today, but it requires a

(22:41):
lot of compute to make it possible.
Because you don't want to ask, hey, can you help me with my cable bill? and then wait three minutes for a response.
You want it to be an interactive conversation, like the one that we're having right now.
And so when you go into different modalities, from, like, textual only to audio, speed starts to

(23:03):
matter a lot in terms of the quality of experience.
But I'll take it one step further, and this requires maybe a little bit of imagination for the audience: when we start to talk about agents, not just a call center agent, but think of agents as being almost like employees or

(23:27):
members of your team and staff.
They have the ability to autonomously work on tasks that you direct them to take on.
And I think the state of the art today, basically, to frame what's possible, is you can have agents that are probably at the level of a good intern.
You know, high school, college level experience, don't deeply

(23:50):
understand your business necessarily, but can work tirelessly at information retrieval and synthesis, and they can probably give you a, you know, a pretty good answer.
That's the state of the art today in the world of agents, and, as a result, you actually will carve up tasks into very specific, narrow domains, right, as you would with an intern.

(24:13):
You don't say, hey, you know, Amith, as my intern, can you make the business plan for the year?
You probably say, hey, can you go do some research on, like, the market landscape and product opportunity related to, you know, member management software, and, like, here's three sources that I want you to make sure you go look at.
And then you get some

(24:34):
information back, and then you go send them off on another task, and then that assembles itself into all the data necessary to produce that annual business plan.
Agents kind of operate in the same way, and so, tying that all together from a speed perspective, you're now not thinking about a one-to-one interaction.

(24:55):
You know, Mallory and Ian are interacting.
It's a one-to-many, and I think most people are going to have a fleet of agents.
Your marketing agents will support, you know, everything from content measurement to content creation, preparation to dissemination.
Like, all of that is going to be autonomously delivered, and so

(25:17):
speed becomes really important here.
With these fleets of agents that interact with each other, you're no longer operating in kind of human terms, you're now at computer speed.

Speaker 3 (25:30):
You know, Ian, building on that, one of the things we've talked about a lot since the fall on this pod is this new class of models, the so-called reasoning models, like what was called Strawberry, then o1, o3, and then, you know, the DeepSeek R1 moment, you know, earlier this quarter, that everybody freaked out about.
And, you know, the key thing that's happening

(25:51):
there that we talk about on the pod is this idea of giving the model longer to think, and so essentially giving it more compute cycles to do the inference you referred to earlier, which, of course, you know, is a direct tie back to your comment about speed, because if you can do more and more with less actual time, that is potentially really powerful.
And, of course, you know, there's a lot of things in AI where there's big research moments.

(26:13):
You know, someone writes a paper that reveals that something like test-time compute or runtime compute correlates with this next scaling law of AI models, where you take the same size model but you run it for longer and you get a better output.
It's kind of like saying, hey, Ian, what's two plus two? You can instantly answer.
But if I give you a complex question and you have to

(26:36):
instantly answer, you're just guessing, whereas if I give you a couple minutes to think about it, you can go solve the problem, and it's very, very similar to what's happening within the chain of thought in the models themselves.
And so I think that feeds back to the point about why fast inference is super important.
But then you can do that in parallel a hundred or a thousand times, because within one of these agents, the other scaling law that's now being quote-unquote discovered, but is also, I think, pretty obvious, is that if you have multiple AI

(26:57):
models running in parallel solving the same problem, and then you ask another AI to pick the best answer out of 10 or 20 or 50 or a thousand iterations of the same problem, you can actually get a much better answer again out of the exact same piece of software, the exact same model.
And, of course, you know, we ourselves, with our Skip

(27:20):
platform, are in fact doing exactly that with Groq.
One of the pieces that Skip has to do is to figure out writing SQL queries, right, which is pretty mundane and pretty simple.
But if you have a thousand different tables of data and lots of metadata to understand what's in those tables, there's lots of ways to approach it, just like a human programmer or a human data analyst might write different queries and test them.
Well, what if, you know, Skip can do 10 or 20 or 100 at a time in

(27:40):
parallel and get to the best query?
And the same thing for writing code to present a chart or whatever.
So we're actually doing this, and this was not possible even really six months ago.
So on the fast inference thing, I think people do go back to their personal experience in the chat application, and of course that's an amazing use case, but it's just one tiny example

(28:01):
of where inference is necessary.
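
The best-of-N pattern Amith describes (generate many candidates in parallel, then have a judge pick the winner) can be sketched like this. The candidate SQL queries and the scoring heuristic below are invented stand-ins: in a real system each candidate would come from a model call over the network, and the judge would typically be another model rather than a two-line heuristic.

```python
# Sketch of best-of-N sampling: run several candidate generations in
# parallel, score each, and return the highest-scoring one.
from concurrent.futures import ThreadPoolExecutor

def generate_candidate(seed: int) -> str:
    """Stand-in for one model call; real code would hit an inference API."""
    queries = [
        "SELECT * FROM members",
        "SELECT id, name FROM members WHERE active = 1",
        "SELECT id FROM members",
    ]
    return queries[seed % len(queries)]

def score(candidate: str) -> int:
    """Toy judge: prefer candidates that name columns and filter rows."""
    return ("*" not in candidate) + ("WHERE" in candidate)

def best_of_n(n: int) -> str:
    # Fan out N "model calls" concurrently, then pick the judge's favorite.
    with ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(generate_candidate, range(n)))
    return max(candidates, key=score)

print(best_of_n(3))  # SELECT id, name FROM members WHERE active = 1
```

The design point is that the wall-clock cost is roughly one generation, not N, which is why fast, cheap inference makes the pattern practical.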

Speaker 1 (28:03):
It's totally true.
I think the anecdote that stuck for me on these reasoning models, if people haven't used them, is: imagine that I asked you to write a five-paragraph essay and you're not allowed to use the backspace key, so you only get the opportunity to type forward.
How well is that essay going to be written?

(28:25):
You can't go back and revise a previous sentence.
If you have a typo, you can't delete it.
You're only allowed to print forward.
And that's a non-reasoning model, right?
It is, at the basic level, a predictor of the next few characters of text, and each character follows the last,

(28:46):
whereas the reasoning models can go back and edit.
And so if you just think about your own ability to produce that perfect five-paragraph essay in one shot versus your ability to, you know, write out a draft, then go back and review the draft, realize that maybe the first paragraph really should be the second paragraph and the last sentence doesn't make any

(29:08):
sense, you want to rephrase it.
You end up with a much higher quality product.
It takes more time, but with Groq you can shrink that time back down to being better than or equivalent to a non-thinking model.
So it's a dramatically different experience and a significant improvement in output quality.

Speaker 2 (29:27):
Ian, are there any particular sectors or areas where you're seeing especially strong interest or innovative applications of Groq?
Speaker 1 (29:35):
Yeah, I think there's a couple things going on.
I would say, and I think about this a lot actually, because I have, you know, three kids, two of whom are teenagers.
So, you know, there's always the question of like, what should I study in school, where should I apply my time?
And it seems clear to me that the models are very, very good at information synthesis.
So if you think about any role where the job is basically

(29:59):
to collect data, summarize that data and then present that information to somebody higher up the organizational chart who's making decisions, those jobs to me are very likely to be replaced, maybe first augmented, but

(30:25):
ultimately replaced by AI systems.
So I was in New York recently, met with a couple of our customers who are building what I would describe as, like, equity market analysts, right, and they actually demoed.
You know, a company earnings call was going on and they're taking a live transcript of that call and comparing it against every previous earnings call transcript, public statement,

(30:48):
you know, press release the company's ever put out.
So they're looking for inconsistencies, they're looking for changes in previously stated strategy.
They're comparing the financial numbers against, you know, the previous forecasts as well as what the market consensus was going into the earnings call.
You know, that used to be a team of people, not particularly senior people in the organization, right, that's like

(31:09):
an entry-level kind of Wall Street job.
That's now a software application, and so I would say, you know, you can find that in other industries as well.
Like I think about legal work.
You know, there's the partner who has a ton of experience in tricky situations.

(31:29):
You know, they've worked in the industry for a long time.
They have a lot of context.
But underneath that partner you have a team of associates, and underneath them you have a team of first-years, and those first-years do, again, that information synthesis task.
They'll go research precedent, case law.
They'll do all the discovery analysis.

(31:51):
All of that is being consumed by AI.
And so I think you can take that kind of paradigm of anything that's information synthesis and summarization should probably be moving from a human-driven task to an AI, initially augmented and then supplanted, situation.

(32:12):
I think the other thing is that we're seeing accessibility of information change quite a bit.
So if you imagine how the three of us would have had a meeting, you know, let's say, pre-pandemic five years ago, we probably would have scheduled that meeting.
We would have tried to get together in person.
I would have flown down to New Orleans and we would have sat

(32:34):
across the table from each other.
Then the pandemic happened and we all ended up on Zoom or one of the other equivalent meeting services, but the meeting itself still stayed in that virtual room.
Right, you could record it and distribute the recording.
But let's be honest, like, how many of us sat down and, like, watched somebody else's meeting?
It certainly is not my favorite way to do it.

(32:57):
But now, with AI in the meeting, not only can I get a recording which is turned into a full transcript, I can get a summarization that's very focused on the type of conversation that I'm trying to have.
So if it's a candidate interview, it tells me exactly what you know.

(33:18):
It focuses on candidate strengths, weaknesses, background, you know.
If it's a customer meeting, it focuses on, you know, next steps and key elements of the discussion, and that's, like, incredibly productive.
But if you think about what's valuable there: now I can share that content with everyone in the organization.
So suddenly you get a level up in, I think, like,

(33:43):
information sharing, which is hard as organizations get bigger, to keep everyone on the same page.
Like, that is changing dramatically.
And then maybe the third example I'll give is just the whole concept of customer service, which has been economically driven.

(34:06):
You know, like, call centers are sort of the most expensive means of interacting with your membership or your customers.
And so the entire strategy, if you operate a call center of any scale, is how do we have people not actually talk to humans?
Those are the metrics: how many people can we deflect

(34:27):
from the call center, and how quickly can we end the calls that show up in the call center, which, if you think about it, is totally misaligned to providing a quality customer experience.
It's like, oh, if you have a question, you should probably just go to my website and, like, research it and find it on your own.
And I think what's happened with AI is now we can actually

(34:47):
provide the most knowledgeable person in your organization, times an infinite number of clones of that individual, who is always available, never tired, never goes on a coffee break, and can be available via text, audio, whatever format or modality you want.
And so it just expands the meaningful ways in which you can

(35:11):
interact with your customers or your membership, well beyond what we were able to do in a financially reasonable way in the past, and so I would kind of categorize the trends that I'm seeing along those three axes.

Speaker 3 (35:25):
You know, related to the last point you made, in the world of associations we're talking about member services.
Much like its corporate counterpart, customer service, member service is, I wouldn't say looked at exclusively as a cost center, but it's definitely thought of at least partially in that way, and associations have attempted to put in place

(35:46):
various kinds of technology to help improve and automate and increase efficiency and so forth.
But principally, the way I would translate what you said is that it's been all about the organization having lower cost per transaction or per interaction, which leads to the misalignment you described.
And I think, beyond just the efficiency, what you just said a moment ago about having the most knowledgeable person in

(36:06):
your organization, or maybe even the most knowledgeable person on the planet, available at all times, not only solves the problem the customer thinks they have, but also flips the script and creates the opportunity in our world for associations to provide more value more continuously.
So rather than the member only calling when they really have to, because they don't quite like it, you know, for the reasons you mentioned, they're like, wow,

(36:28):
this is amazing.
I really should be calling this member services hotline a lot more often, or emailing them, because I can get instant, amazing results.

Speaker 1 (36:37):
A hundred percent.
If people haven't tried 1-800-CHAT-GPT, you have to do this.
I mean, I don't make a lot of actual dialing-the-phone phone calls these days, more of a text message person, but the experience is incredible, and you know the technology is now at such a state that it's reasonable for everyone

(37:00):
listening to this podcast.
Your organization could have a similar experience.

Speaker 2 (37:07):
Well, I was going to say, this might be more of a question for you, Amith, but it sounds like, on paper, as I said, Groq seems amazing.
Groq with a Q: faster, better quality.
I think member services is a huge category, which we actually just did a series of episodes on, Ian, right before this one, talking about AI-augmented or -enabled member services.
But, Amith, I'm wondering, for our listeners who are in

(37:27):
agreement Groq is amazing, sounds good.
What do we do with this information?
Like, do we need to be asking our vendors if they're using Groq?
Do we need to be doing something ourselves with Groq?
So, Amith, what's your take on that?

Speaker 3 (37:40):
My thought process, first and foremost, is that the awareness of this technology existing and being available at scale, at low cost, with high quality, is important to note in your brains, because a lot of people still have year-old or two-year-old experiences lodged in their minds, saying, yeah, you know, ChatGPT was interesting, but it was not that great and it was kind of slow.

(38:01):
I can't imagine using that with my members, right?
And so that type of dated perspective can be problematic in terms of strategy when you think about what is possible.
So I think that's the first thing.
And then the second part that I would move on to share is that when you think about having effectively unlimited free, or not literally free, but close-to-free, cost of incremental

(38:23):
inference, and heading in that direction, right, the cost curves are very quickly pushing all these costs further down, it opens up new possibilities.
The thing I mentioned about running 10, 20, 50, 100 queries in parallel to test the best query, to give the customer the best answer.
That wasn't possible even six months ago, and it would have been potentially technically possible, but just not cost

(38:46):
effective, not reasonable in terms of time.
So I think, more than anything, it's about opening your mind to the possibilities and starting to rethink business models.

Speaker 2 (38:54):
Ian, I wanted to ask about a saying that I hear many times, not just about tech, but even in business all the time, and life, which is the cobbler's children not having shoes.
I want to know how Groq, one of the, you know, most leading companies in the world in terms of AI, is using artificial intelligence internally, because you would

(39:16):
think, right, that if you're doing all these incredible things with inference, internally you must be doing incredible things as well.
So I was hoping you could talk a little bit about that.
Speaker 1 (39:25):
Well, we definitely use it in software development, which for this audience may be not as interesting as some of the other items I talk about, but you know it's an accelerant.
We're a relatively small organization, Groq's about 300 employees, and we're building a lot, right.
We're doing everything from semiconductors to deploying a

(39:57):
global cloud to the whole software interaction layer that wraps around that to make it easy to use for everyone that's a customer.
So, you know, in that way AI is scaling our ability to build and deliver technology.
But I'll pull a couple examples from my organization.
Just recently we had a proposal response where a prospective customer had sent us an RFP, you know, a fairly complex document,

(40:17):
like 10 pages of questions, and, you know, we diligently prepared a response.
You know, kind of a typical human-driven process.
Multiple people reviewed it.
You know, it was on my desk, it was on a couple of the other exec team's desks, because it was a pretty important customer opportunity.
And at the end we're like, yeah, we're ready to ship this thing.
We said, hey, let's just let the AI take a look at it.

(40:41):
And so we took both the proposal and the original RFP, submitted it and said, hey, what do you think about our proposal?
Is there any way we can improve it?
That was the prompt, nothing more complex or sophisticated than that, and the response that came back was incredible.
It said, you know, it caught that we had some inconsistencies,

(41:04):
you know, because it was a pretty long RFP, and so they'd asked kind of the same question in a few different ways.
It's like, well, in section one and section three you're referencing the same thing, but you've indicated, like, different values in the response.
And it was like, you know, you failed to clearly answer this question out of section two, and section

(41:28):
four is just kind of, like, poorly written, like you could clean this up and the clarity would be much higher.
So it was like we had an expert member of the team with both, like, very good copywriting skills but also domain knowledge to reason about what was a very technical subject.
And so this, to me, was one of those aha moments we

(41:51):
talked about earlier, where I wasn't actually sure what I would get back, if there was going to be anything valuable in this response.
It was really kind of a final checkoff in the process that we just, you know, happened to throw it in there, and now I'm like, wow, I'm never going to send a proposal to anyone again without doing this step, because it clearly caught some things that were missed by the humans involved.
And so I

(42:15):
think that we're constantly pushing the boundaries of what parts of our jobs can we automate, eliminate or augment, and it's a lot of fun.
I actually really encourage people to go out and try it.
Your comment earlier, Amith, about people having one

(42:35):
experience where, like, oh, it couldn't do this.
I think the mindset shift that's necessary to excel in the current time is to assume the AI is like one of your kids or an intern.
As I said earlier, they're going to need fairly precise instructions, but they learn really quickly, and so, if you

(42:56):
don't quite get the answer that you want or the result, you know, send another prompt, like, refine your original request, provide a little bit more information.
Sometimes you can even chain together prompts, so you can ask one LLM, hey, I want the following things, can you help me write the prompt that will get this other model to do this for

(43:17):
me?
And so you get the prompt written by one AI, and then you hand that prompt off to another system, and you tend to get much better results.
We actually have some customers who are doing that, where they've built their own image generation model, but they use Groq to improve the customer prompt.

(43:37):
So, you know, a user might come in and say, hey, I want a picture of leprechauns, because it's St. Patrick's Day, as we just had a few days ago, you know, dancing in a field.
But that prompt is not particularly specific, and so you're liable to get either low-quality images or kind of random ones.
So they'll feed that, transparently to the user, through an LLM running on Groq, and then

(44:00):
the output of that goes into their image generation model, and so they get much higher quality images as a result.
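The two-stage pattern Ian describes, one model rewriting a vague user request into a detailed prompt before it reaches the image model, might look roughly like this. Both model calls are stubbed placeholders for illustration, not any vendor's real API:

```python
def call_llm(instruction: str) -> str:
    # Stub standing in for a text-model call that expands a vague
    # request into a detailed image prompt.
    return f"[detailed prompt derived from: {instruction}]"

def generate_image(prompt: str) -> bytes:
    # Stub standing in for an image-generation call that returns
    # image bytes.
    return prompt.encode("utf-8")

def chained_generate(user_request: str) -> bytes:
    # Stage 1: rewrite the user's request with concrete detail.
    detailed = call_llm(
        "Rewrite this image request with specifics about subject, "
        "setting, lighting, and style: " + user_request
    )
    # Stage 2: the improved prompt, not the original, drives generation.
    # The rewrite step is invisible to the end user.
    return generate_image(detailed)

image = chained_generate("leprechauns dancing in a field")
```

The design point is that the prompt-enhancement call is cheap and fast enough to sit inline in the request path without the user noticing it.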

Speaker 3 (44:07):
Yeah, it's pretty awesome.
It goes to the point both of how quickly this stuff is changing, but also the skill sets that you need to work effectively with these tools, like this whole area of prompt engineering or prompt design.
And on the one hand, you know, the smarter the AI gets, either through an iterative model or a multi-stage model like you're describing, or just because the model's smarter inherently, it

(44:34):
might make prompt design perhaps a little bit less of a challenge.
But on the flip side of that, there's always skills to learn, and so what you're describing, I think, is a good reminder to all of our listeners to just continue to hone and refine your skills and not be stuck in a particular mindset.
I find myself doing this all the time, where I just get used to using a tool a certain way and I ask the same questions.
Right, we all have that.
You know, our neural nets have those Grand Canyons that get formed

(44:56):
after multiple repetitive uses of a particular pathway, and it's hard to break out of those Grand Canyons.
So it's really, really interesting.
The AI has no such problem, but we have to reinforce our own, you know, desire to change, constantly learning.

Speaker 1 (45:11):
So my wife and I, we're taking the kids on vacation in a few weeks, and we have the opportunity to design a menu, you know, from a set of choices that we'll have while we're on vacation.
And my wife started kind of, like, filling it out by hand, and she's asking everybody what they want to eat.

(45:32):
And I just took this PDF menu and I threw it into an LLM and said, hey, five of us are going on vacation, we've got five days' worth of meals, but we're going to go out to dinner one night, so skip that one.
And we want to have a particular breakfast over Easter, so make sure that's pancakes.

(45:52):
And, you know, we're going to depart, you know, early morning, so you don't need the full day, just breakfast.
Give me a menu, you know, and it spit it back in like two seconds, and it wasn't perfect.
It recommended some things that the kids were like, oh, that's gross, Dad.
It's like, okay, well, the kids don't like these foods, can you

(46:12):
fix it, right?
And then, pretty quickly, we got the full menu taken care of in, like, you know, three minutes.
So you can even find stuff in your own lives that has nothing to do with work where you can start to apply the technology, and you start to feel out the boundaries of what's really useful and what maybe doesn't quite work so well yet.

Speaker 3 (46:32):
I did something the other day similar to that.
I had an Excel spreadsheet from a ski trip with a group of friends, and I had a whole bunch of data in this spreadsheet about money we spent and shared expenses and reallocating it and all this stuff.
I just dropped it into an LLM and I said, hey, analyze the spreadsheet.
First of all,

(46:53):
are there any incorrect things in there?
And on top of that, tell me where we could have been smarter in terms of how we spent money, and give me an idea of how we spent the money on this trip.
And so that was actually run through Anthropic's latest Claude 3.7 Sonnet model, but on the desktop, where I enabled that feature where it can run its own code inside the artifacts feature.

(47:14):
So it started writing a bunch of code, and I was seeing it write the code and I was watching it.
I'm like, man, that's pretty good.
And then it started running the code.
It had errors.
It ran the code again, had more errors, it fixed them, and then after about maybe 45 seconds, something like that, it showed me this thing which was like a pie chart of how we spent the money, the allocations of different people.
It called out that one of the guys rented a particularly expensive

(47:35):
car, and it was like, yeah, you didn't need to spend that much on this rental.
I thought it was awesome.
I'm like, that's way too much inference to throw at this.
I probably, like, you know, burned a thousand gallons of water or something.

Speaker 1 (47:53):
But yeah, it was good.
It's so funny how the LLMs are, like, very conservative on money spending.
I've had other people tell me stories about, like, throwing their credit card bills in there and asking for guidance.
It's like, you know, you really shouldn't be taking so many Ubers.
Like, public transportation's available, you'd save like $70.
So there's something in the training data set that is, like, very frugal.
You know, they've been watching, like, late-night CNBC or

(48:14):
something like that, the Money Stuff podcast or something maybe.

Speaker 2 (48:19):
Claude calling you out, so you don't have to call
yourself out or your friends out.
Ian, I'm going to put you on the spot.
Amith mentioned Claude.
I'm a big Claude fan over here.
Do you have a preferred model that you go back to, or do you kind of switch it up?

Speaker 1 (48:32):
You know, I tend to switch around based on tasks, but I definitely like Claude.
I think what they're doing in terms of software development is outstanding.
The user interface is really good.
I'm a big fan of Perplexity, because they've really focused on web search combined with LLMs.

(48:52):
So for answering questions, and now their deep research feature, I think, is fantastic.
I was having a debate with one of the folks here at Groq the other day about the cost of operating infrastructure and data centers in Europe, and they had a particular point of view

(49:13):
on how much more expensive the UK was relative to some place like Norway, where energy is abundant and nearly free.
And I said, I don't think you're right.
And they said, no, no, I'm absolutely right, I know the answer.
And so I went and looked it up, and, as opposed to having to assemble all this data myself, you know, I just kind of fired off a quick, basic query, like, give me the, you know, operating

(49:36):
expenses for data centers in Europe, and, you know, in a couple minutes it came back and had research with citations about the sources of data, and it turned out I was right.
It's about 2x more expensive to run a server in the UK over Norway, but not 10x, which is where my colleague was.
So I'm a big fan there.

(50:00):
OpenAI has got some great tools as well.
Just yesterday they launched a pretty incredible update to their audio models.
So if you haven't, go play with OpenAI.fm, and you can interact with a bunch of different generative voice models.
It's a

(50:25):
ton of fun.
Will do.

Speaker 3 (50:26):
We've talked about audio a couple of different times so far on this pod, in different contexts, both in terms of the importance of inference speed to provide real-time, high-quality audio, and what you just mentioned.
And there's so much innovation happening in the world of audio, and a lot of it's happening in the world of open source.
I mean, even OpenAI has played that game, at least on one side of the audio equation, with Whisper, which you guys have turbocharged on Groq, which is super exciting, and I find that

(50:51):
to be particularly compelling because that's kind of our natural modality.
You know, that's the way we tend to interact with each other, so the fidelity and resolution, so to speak, of audio over text is dramatic, and I think there's an opportunity there.
The question I wanted to ask you, Ian, is how do you feel about the domain of, you know, we talked about research and synthesis of information, perhaps some analytics, but part of what a

(51:11):
lot of folks are talking about is, you know, and I have teenagers as well, as you know, and guiding them on career decisions is, I think, a fool's errand on a good day, but nonetheless I try to do it myself, and I just say, like, try to do things where human connection and communication are emphasized.
My question to you is, especially with audio becoming so lifelike, what are your thoughts on that in terms of AIs

(51:31):
being able to form connections with people, of sorts?
Right, you might put the connections in

Speaker 1 (51:39):
air quotes, but nonetheless it might feel as such to a lot of users.
Yeah, it's not something that I've personally gotten into.
To be honest, like many people, I've probably seen the movie Her a few too many times, which paints a little bit of a dystopian outcome for getting too attached to your AI.
But I definitely know people who have formed those bonds, and they talk emotionally to the

(52:02):
AI.
You know, they say please and thank you, although I've been told that it turns out if you're mean in your prompts, you actually often get better results.

Speaker 2 (52:12):
I'm pretty nice in my prompts.
I'm just going to say it.

Speaker 1 (52:16):
Yeah, exactly.
Like, you probably act nicer to it than you might to some of the real humans in your life.
It's a really common thing, and so I think there is a real potential, as these things get more lifelike and they get much more

(52:36):
emotional, or sympathetically emotional maybe, that you're just naturally going to be attached to them.
This was kind of the pitch around the GPT-4.5 model.
OpenAI came out and said, hey, it's not a frontier model, it's not going to blow people away with new achievements on benchmarks, but we think it's going to be much more pleasing

(52:58):
to interact with as a human companion, which is a very interesting positioning tactic.
And that was kind of the early result.
You know, people were like, oh yeah, I actually feel like I could chat with this thing for a while and talk about, you know, not

(53:23):
work problems, not, like, synthesize this data, but, you know, here's something going on in my life, give me some advice.
So I think we're going there.
It's going to happen.

Speaker 2 (53:33):
As we wrap up this episode, Ian, I want to hear what you are looking forward to at Groq, and what AI infrastructure things do you think we'll be seeing in the next few years, six months, whatever that may be?

Speaker 1 (53:45):
Well, we're on a mission to bring inference to the world, which means both making it globally available and affordable for everyone.
So we're investing significantly in our core technology.
We've got new chips coming this year that we're super excited about, faster, more efficient, able to run even

(54:08):
bigger models as the frontier model labs continue producing those.
We're rolling out new data centers around the world, so adding capacity every week and bringing that to more markets all around the world.
And we're building some really cool application-level tools to

(54:28):
make consuming this easier and easier.
So, you know, stay tuned for more on that front.
But, you know, going from idea to working in production is just going to get shorter and shorter and shorter as we go through the year.
So it's a lot of fun this year.

Speaker 2 (54:47):
Awesome, awesome.
Ian, thank you so much for joining us on the Sidecar Sync podcast.
This was a really fun conversation, and I know our listeners are going to get a lot out of it, so thank you for joining us.

Speaker 1 (54:57):
My pleasure, Mallory.

Speaker 3 (55:01):
Thanks for tuning in to Sidecar Sync this week.
Looking to dive deeper?
Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org.
It's packed with insights to power your association's journey with AI.
And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the

(55:23):
association world.
We'll catch you in the next episode.
Until then, keep learning, keep growing and keep disrupting.