
February 20, 2025 | 53 mins


This week on Sidecar Sync, we take a deep dive into the Paris AI Action Summit, exploring global AI trends, governance challenges, and key takeaways from this high-profile event. With 60 nations signing the final declaration—but the U.S. and U.K. notably opting out—what does this mean for AI’s future? We also discuss the Reuters legal victory against Ross Intelligence, a major copyright case with potential implications for AI training and content use. Plus, we reflect on how associations can stay competitive in the rapidly evolving AI landscape. Don’t miss this packed episode filled with insights, debate, and strategies for AI-driven success!

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
https://learn.sidecar.ai

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecar.ai/ai

📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/

🚀 Find out more about the upcoming Blue Cypress Innovation Hubs!
Washington, D.C.: https://bluecypress.io/innovation-hub-dc
Chicago: https://bluecypress.io/innovation-hub-chicago

🛠 AI Tools and Resources Mentioned in This Episode:
DeepSeek ➡ https://deepseek.com
Llama 3 ➡ https://ai.meta.com/llama/
OpenAI GPT Models ➡ https://openai.com

Chapters:
00:00 - Welcome to Sidecar Sync
02:10 - Key Highlights from the Paris AI Summit
08:38 - Global AI Governance: Who Signed & Who Opted Out?
13:10 - Is the U.S. Really Dominating AI?
17:48 - AI Safety, Containment, and the Role of Good AI
26:12 - Deceptive Misalignment: A Growing AI Risk?
29:27 - Reuters' Legal Win: What It Means for AI & Copyright
37:30 - How Associations Can Protect Their AI-Used Content
43:02 - The 95% vs. 100% Content Challenge
50:24 - Closing

🚀 Sidecar on LinkedIn
https://www.linkedin.com/company/sidecar-global/

👍 Like & Subscribe!
https://x.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecar.ai/

Amith Nagarajan is the Chairman of Blue Cypress https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory:
https://linkedin.com/mallorymejias


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Mallory (00:00):
This is your time to be creative, to stand on the shoulders of AI giants out there and, I've got to say it, to stop thinking about your crusty old AMS. If you know, you know.

Amith (00:15):
Welcome to Sidecar Sync, your weekly dose of innovation.
If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions.

(00:38):
I'm Amith Nagarajan, Chairman of Blue Cypress, and I'm your host. Greetings and welcome to the Sidecar Sync, your home for content at the intersection of AI and the world of associations. My name is Amith Nagarajan.

Mallory (00:55):
And my name is Mallory Mejias.

Amith (00:57):
And we are your hosts, and we have two super exciting topics today at that intersection of AI and associations. Before we hear all about those two topics in this week's podcast, let's take a moment to hear from our sponsor.

Mallory (01:12):
If you're listening to this podcast right now, you're already thinking differently about AI than many of your peers. Don't you wish there was a way to showcase your commitment to innovation and learning? The Association AI Professional, or AAIP, certification is exactly that. The AAIP certification is awarded to those who have achieved outstanding theoretical and practical AI knowledge

(01:35):
as it pertains to associations. Earning your AAIP certification proves that you're at the forefront of AI in your organization and in the greater association space, giving you a competitive edge in an increasingly AI-driven job market. Join the growing group of professionals who've earned their AAIP certification and secure your professional future

(01:56):
by heading to learn.sidecar.ai.
Amith, I was going to say hello, but I should probably say bonjour, because we're talking about the Paris AI Action Summit today. How are you doing?

Amith (02:10):
I'm doing great.
I should have picked up a croissant or something on the way here. That would have been good.

Mallory (02:16):
Do you speak any French?

Amith (02:18):
Not at all. One of my kids has been in, or was in, French immersion school all the way through eighth grade. She's in high school now, so not doing that anymore. But yeah, I have zero skills in French. It's a beautiful language, but I find it extremely intimidating. I never felt that way about trying to learn any other

(02:38):
language, really, but French kind of puts me on my heels.

Mallory (02:42):
Yeah, I think it's the pronunciation particularly; grammar-wise, I think it's quite similar to Spanish. I took French in high school, but I don't speak any French. I just think the sound of it is kind of intimidating, as you said.

Amith (02:55):
Yeah, it sounds super cool, but it definitely sounds like a very fancy language. That's my way of complimenting it, just to be clear. But I've been to France and French-speaking countries of various kinds many times and I love it, and I would say, after a couple of drinks, it tends to become a lot easier to speak in French, even though I still have no idea what I'm saying, but

(03:17):
maybe AI will help me the next time I get over there. In fact, the last time I was in Paris, I was in France for about a week, and I was in Paris for a few days out of that. It was right around the time Sam Altman got fired from OpenAI, if you remember that episode. When was that? Late 2023, I think. So, yeah, it was an interesting time to be there, and Mistral

(03:40):
was just getting launched at that time, I think. So, yeah, looking forward to chatting all about Paris and AI.

Mallory (03:49):
For sure. My memories: France, great country; French, great language. But I do have a funny memory of my now husband and I driving. We had rented a car and we're driving to southern France, and don't speak French, of course.

(04:09):
And so we get to this toll section where you would think they would have images on the tolls of, like, which lane to get in, that were more explanatory, but it was just words. And so we picked one, and we evidently picked the wrong line, and we had a line of people behind us yelling at us in French. And so I'm thinking now it would have been nice to maybe have, like, an in-ear translator. Maybe not, maybe it's actually better that I don't know what they were saying, but we were just screaming at the woman in

(04:30):
the little phone call box of, like, let us through, let us through, we'll give you money, please, please, let us through. So that was a fun memory for sure.

Amith (04:39):
Was there actually, like, something stopping you from
driving through?

Mallory (04:42):
Yes, yeah, there was a gate, and then a woman speaking to us through a box, and we were, at that point, just willing to throw euros at her, like, we'll give you anything, please just open the gate. Open the gate so these poor truckers can get through and do their job. But yes, that is certainly a funny memory for me when it comes to French. This is somewhat related.

(05:04):
I don't think I've talked to you about this just yet, but Sidecar has been working with the DC Bar organization to host a series of custom webinars for them, and I got permission from my contact there, Katarina (shout out if you're listening), to talk about this on the podcast, because I thought it was so neat. It's called the AI Olympics.

(05:25):
This series of webinars is actually feeding into a contest that they're hosting internally, called the AI Olympics, which is also relevant because, as we know, the last Olympics were held in Paris. And the idea is that we host these webinars. We're teaching their staff all about all the tools that we know at Sidecar, some of which we use all the time and some of which

(05:46):
are new to us, that we've been discovering in the process of planning these webinars and the AI Olympics. At the end, they're awarding actual medals and gift cards, and the idea is that if a staff member submits a proposal (it's not tool-specific, it's actually, like, for a business process improvement or something like that), DC Bar will give them a medal and then fund the project, which I think is really cool. So Sidecar is thrilled to be a part of that. But I thought it was a very interesting idea to kind of structure AI experimentation in that way, like the Olympics for an association.

Amith (06:25):
That's creative. And so, are they going to do breakdancing as part of their Olympics?

Mallory (06:35):
Oh, man, they should. Hey, if it solves a problem for DC Bar.

Amith (06:37):
I think breakdancing should be in the Olympics. Yeah, we'll see an AI version of that. That'd be good.

Mallory (06:40):
As long as Australia is not in it, right? Just joking, just joking. All right, first topic of today: we're talking about the Paris AI Summit (a little French theme on the Sidecar Sync today), and then we are going to talk about the Reuters legal victory in a copyright infringement case and what that kind of means for the greater AI landscape.

(07:01):
So, first and foremost, the Paris AI Summit was held on February 10th and 11th of this year, and it was a significant global event focused on artificial intelligence. Of course, it was co-chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, and the summit brought together representatives from over 100 nations,

(07:22):
including government leaders, international organizations, civil society, the private sector and academic communities. The summit centered around five strategic focuses: public service AI, the future of work, innovation and culture, trust in AI, and global governance of AI. Discussions emphasized the need for an inclusive and diverse AI

(07:45):
ecosystem grounded in human rights, ethics, safety and trustworthiness. The summit also highlighted the importance of bridging digital divides and assisting developing nations in AI initiatives. So what were some notable developments on this AI summit front? Well, unlike previous AI summits that concentrated on potential dangers, the Paris summit emphasized AI's positive

(08:08):
potential, particularly in fields like medicine and climate science. The summit saw significant investment announcements, including 110 billion euros in pledges from various sources. There was also a noticeable shift towards less regulation, with French President Macron advocating for cutting red tape to foster AI innovation in Europe, and a Coalition for

(08:32):
Sustainable AI was formed, focusing on measuring and mitigating AI's environmental impact. We know that the US was in attendance. Vice President JD Vance emphasized AI opportunities over safety concerns and warned against excessive regulations. He also mentioned that the US would be dominating in AI. In

(08:53):
terms of the UK: along with the United States, the UK declined to sign the summit's final declaration, citing a lack of practical clarity on global governance and national security challenges. Unlike the US and the UK, who declined to sign the summit's final declaration, China did sign, which is pretty notable,

(09:18):
and there was also recognition that Europe had lagged in the AI race, with calls for a wake-up in European strategy. The summit ultimately culminated in a declaration endorsed by 60 nations, calling for an inclusive and open AI sector, but the absence of significant AI risk discussions and the US and UK's refusal to sign highlighted ongoing international divisions in AI governance and approaches.

(09:39):
The Paris summit seems to have marked a shift in global AI discourse, moving from safety-centric discussions to a more opportunity-focused and action-oriented approach in AI development and governance. So, Amith (this is episode 70, by the way, for all of our listeners), I want to say we covered the last AI summit, because this is kind of ringing

(10:01):
a bell. It feels like deja vu, and that was not a pun. But what are your thoughts on this AI summit, the 2025 AI summit?

Amith (10:10):
Well, all this talk about France is making me kind of hungry, because I'm food-oriented. So I'm going to have to figure that out after we get done recording. But I'm excited about the fact that we have a global conversation happening. I'm not surprised that there are disagreements, as there should be, in the discourse, you know.

(10:30):
I think the conversation shifting from concern- and regulation-centric views and safety to how do you move forward is important, because Europe, which is obviously a major world power collectively, and even

(10:50):
individually, many of the individual states are significant. They need to get on the bus, they need to get going, and France has been the outlier out of all of the EU nations and has done more and more, and I'm really happy to see them continue to push in that area. They're extremely well positioned to do very well in AI because of their energy sector, because of their capabilities

(11:14):
in terms of academia and math and science, their strengths there historically. So I think they're well positioned for it. The bottom line is, there is a combination of a need for an open environment from a regulatory perspective as well as a lot of capital, and they're addressing both of those things, at least on paper. So it's exciting. You know, my view is on the 200 billion euros: I think

(11:38):
the number you said was the original announcement, but it's grown since then, because I was hearing the number 200 billion thrown around, which is great. It's a good start, and it's not enough to put Europe really on the map in terms of AI, but it's at least a significantly greater sum. It's an order of magnitude larger than anything I've heard previously announced in the region, so "I'm excited about it" is the short version of my answer.

Mallory (12:07):
Yep. What do you think about JD Vance's comments that the US is dominating AI and will continue to dominate AI? I know a lot of the major AI players are US-based companies, but we've also seen some action from China, and from Mistral as well, which is based in France. So kind of, what are your thoughts there in terms of what country might be dominating that space?

Amith (12:21):
I think there's two sides to that coin. On the one hand, I think there's plenty of reasons to argue that, at the moment, the US is clearly in the lead on the most cutting-edge frontier models. I think that's objectively a true statement. I think that, on the hardware side, we clearly have leads, not only because of NVIDIA, but because of other types of hardware that's accelerated, all

(12:43):
of which, as far as I know, is coming from the States. And because of our venture capital industry: $97 billion of funding just last year for VC, and in Europe it's literally at one-tenth of that, even though the economy is similar in size to the US, and the population is very similar in

(13:03):
size. So risk tolerance versus risk aversion is a cultural shift that needs to happen, but I do think that a strength of the United States is innovation, risk tolerance, and that's going to lead to good things happening. We need to have the capital. We need to have the regulatory environment continue to be good. The flip side of that coin, though, is that you could argue

(13:25):
quite effectively that the United States is not in the lead if you were to take the collective resources of the rest of the world, certainly, but even an individual large nation like China, being able to innovate at the pace they've been able to innovate with their hands tied behind their back in terms of hardware access and access to top technologies on the software side. So I think that we're going to see some

(13:48):
interesting things happen. It's not necessarily as capital-intensive in all respects as some people have thought it to be, so that means smaller players get to be part of the game and innovate in ways that maybe previously would have required only the largest players, even a state actor.

(14:09):
I think the question of whether the US will stay in the lead is a great one. I'm not so confident about that. I'm optimistic that the US will play an extremely important role in AI, but I also think it's actually really good if we have strong competition from many parts of the world, including parts of the world that are not part of the conversation today, but have creative and capable people and

(14:31):
have enough capital to play in different layers of the stack. And what I mean by layers of the stack is: you have hardware, you have fundamental architecture around models, you have application layers, you have delivery, you have services. I think you're going to have a lot of players in a lot of areas.

Mallory (14:48):
Amith, I know at your previous company, your AMS company, you worked globally, right? Or the AMS company was working with global associations. Do you have kind of a lot of knowledge on associations based in Europe, associations based in Asia, and kind of how those might be similar to US-based associations?

Amith (15:32):
[transcript segment unavailable] There's an education-centric focus, there's a publishing-centric focus. There's a focus around community conversation, those types of things, the things we know to be association activities and association behaviors here in the US. In

(15:53):
other parts of the world, while those behaviors and activities also are evident, you see a lot more focus on government and regulatory types of activities. So you certainly see that in East Asia; you see that in terms of the trade association side, you see some heavy focus there, and a little bit less focus, in my experience, on the education and

(16:15):
conferences side, although that's growing. In Europe, I think there's a high level of similarity, even in the non-English-speaking parts, with respect to the focus on education. But there's definitely the trade side. The government side, I think, is stronger. Perhaps that's in part because of the nature of regulation in those parts of the world being a bigger factor for some people.

(16:37):
But what I would tell you is that the common thread around connecting people with each other and with great content is definitely the through line for all associations that I've encountered around the world. So I do think that all associations will be affected by AI in their region. And what I also think is really important is that not all models or

(17:00):
not all AIs are the best for everyone. So when we say, is the US leading in AI? Well, on certain benchmarks, certain models might lead the world. o3 from OpenAI is the best across the board (well, not across the board literally, but in many categories, certainly in academic-style competitions). But it doesn't necessarily mean it's the best model for an

(17:22):
association or a nonprofit in India or Vietnam to use. There could be models that are much smaller in terms of their overall size, and perhaps not as powerful on an absolute basis, that are much more effective at handling the needs of associations in those regions. And maybe they'll be homegrown, maybe they'll be using some of the open source, you know, componentry that's out there and built on top of that.

(17:44):
So, you know, the question really, ultimately, is: what are you trying to achieve?

Mallory (17:48):
I want to share a quote from Demis Hassabis, who we've talked about on this podcast many a time. He is the co-founder and CEO of Google DeepMind, and this is the quote; it's about the AI summit: "AI is a dual-purpose technology. As we work to reap its benefits, we have to address two central questions. The first is about risks from bad actors, or people who would

(18:10):
use this technology for malicious ends. If we really want to unlock a golden era of progress, we have to enable good actors to continue to make advancements while restricting access to would-be bad actors. The second question is how we ensure we stay ahead of novel risks that could arise as we approach AGI. This includes things like deceptive misalignment, which we

(18:33):
discuss in our recent update to the Frontier Safety Framework. These concerns are not far off or far-fetched, nor are they limited to one particular geography. They are global concerns that require focused international cooperation." So just at a glance, Amith, I know you shared this with me on LinkedIn. Do you kind of agree with the sentiments that Demis is raising?

Amith (18:55):
I agree with them in part. I agree that the concern should be taken seriously globally. I agree that we absolutely need frameworks, and we need shared agreements and thinking around how to manage this, how to evaluate models' power and performance. What I don't agree with, because I don't understand the practical way of doing it, is the containment idea, and we've

(19:15):
talked about this on the pod many times. We haven't talked about it probably in a few months. But this idea that we can somehow contain this beast and limit access is questionable. So containment as a theoretical construct is great if we could somehow say: listen, the most advanced forms of AI, the most

(19:38):
capable tools that indeed could be used to cause the greatest harms by bad actors, could indeed be contained, put in a box and only made available to people with good intentions. In theory, that sounds good. The first issue with it is: how do you do that?

(19:59):
How do you do that when open source AI is almost as good as the very best AI, right? So you put a tool that is nearly as good as the best tool in the hands of someone who's creative and has negative intentions, they're going to figure out how to do some damage with it. So I don't know how you put the genie back in the bottle, essentially. I don't know how you can do that. You think about, like, DeepSeek R1 being as good as o1 and

(20:21):
nearly as good as o3 in some benchmarks, and this stuff is getting better so quickly, and you can run it anywhere, you can run it on computers you control. There's just no practical way that I can think of to actually do containment, unless you're saying we're going to have a regulatory framework that essentially makes it illegal to have open source models beyond a certain level of capability.

(20:41):
Of course there's a question of enforceability. How do you actually stop that from happening? Obviously there's an enforcement question, because around the world not everyone agrees with that. And so if China and other countries keep releasing open source models that are increasingly powerful, we could say all day long in the US and the EU, or wherever else, let's opt into this, that we're not going to allow open source models beyond

(21:02):
a certain level or a certain threshold. But all that means is we're just going to be behind everyone else. So the geopolitical risk obviously is really high. So that's the reason I think containment is a questionable approach. I don't think you should stop talking about it. I think we need to keep discussing this and work together as a global community to try to figure it out. I just don't believe that it's likely that we're going to be

(21:23):
able to contain these things. The second issue with containment is that therein lies an assumption that the good actors can be identified, and that what they consider to be good is somehow universal, which it is not. That which is considered a good intention in a Western country might be considered the inverse of that in the Eastern world,

(21:46):
and vice versa. And so you can't apply one way of thinking about good versus bad and come up with a global framework for who the good actors are. Is it all nation states? Well, not necessarily. Is it certain nation states? Well, therein lies an impasse. And then agreeing on who gets access to the best, that's going to be a giant problem, and we're limited by

(22:19):
all of our usual biases and challenges to come up with solutions like that. So that's really my long-winded way of saying I don't think it's going to work. That being said, AI safety is very important. I do think we need to have frameworks, we need to have ways of measuring this, we need to have ways of detecting malicious use. But I believe, and I've said this the whole time, the only true defense we have against the bad use of AI, or bad AI, is a lot

(22:44):
of really good AI. It's a long-term race. We're going to have to keep going at it, and going as fast and as hard as we can, effectively and indefinitely, because the frontier of this essentially has no boundaries as far as we know. So the only way we're going to be able to detect and stop bad AI, as a practical matter, I think, is with lots of really

(23:04):
good AI, and essentially increasing our defenses, whether you're talking about cybersecurity or physical security: having a lot of very high-horsepower, well-intended AI run by well-intended people protecting those things. So that's my point of view. Now, I don't have a ton of experience in this. I don't think anybody else does, actually, as a practical matter,

(23:25):
because this is all new. But that's also limited by my biases. I just don't see how it's possible. But I would love to be wrong about this.

Mallory (23:33):
Yeah, I think it's a really smart point you make, though, about the subjectivity of good versus bad, because even something seemingly as simple as human rights is disagreed upon across the world. What are basic human rights? What are humans entitled to? So I think that's a really smart point, and it's interesting that for you, that's what stood out.

(23:53):
For me, what stood out is deceptive misalignment, I think because I was not so familiar with that phrase, but I wanted to define it for our listeners: it's a situation where artificial intelligence appears to be aligned with its human designers' goals during training, but it's actually hiding its true, misaligned intentions. That's terrifying.

(24:14):
I mean, that sounds like the premise of several sci-fi movies I've actually seen. So is that something you think about, Amith, deceptive misalignment? And is that something our listeners need to be thinking about?

Amith (24:26):
Well, the question would be... So, ultimately, the way I think that statement was constructed assumes there is a person or people behind a model that is aligned in a way that actually has an alternate agenda, right? So it appears to be behaving and working with you in a certain way, but really its goal is mass surveillance, or its

(24:47):
goal is to give you advice that's not quite the best. So think about something that's giving you advice that's just slightly off, not so obviously bad. Like, you say, hey, I'm really trying to work on my nutrition, I want to be healthier. And it's like, oh, go to McDonald's, drink as many sodas and as many beers as you can. Okay, well, that probably wouldn't be believable. But what if it gave me slightly bad advice at each turn that I

(25:08):
wasn't able to detect, and over time, it was able to give me really bad advice, because I bought into this thing basically being, you know, without fail, the right answer? That's a problem. Now, why would an AI do that, right? Well, an AI could have been trained on poor quality content, but that's not a misalignment thing. It just has bad insights, right?

(25:29):
The idea of intentionally deceptive misalignment is one where a person, essentially, or people would have to say, hey, I'm going to train an AI to behave one way, but really it's optimizing for something else. Really, what it's optimizing for is: I want to basically make the public health of that country terrible, so, very slowly, I'm going to tell them to adopt really bad habits, right? I mean, something as ridiculous as that.

(25:50):
Theoretically, of course, it could happen. And what I always tell people to be watchful of is, kind of, understanding the provenance of the technology they're using. So let's take an example: these meeting note takers that are out there. I've made the joke on this pod and a lot of other places. You're pretty secure about the way you think about your

(26:11):
documents in your Microsoft or Google repositories, or in Box.com or Dropbox or whatever, right? People are pretty thoughtful about that. Most people use two-factor or multi-factor authentication. Yet in meetings, where we often discuss some of the most sensitive topics that are really, really critical to retain privacy around, we kind of let any meeting note taker in.

(26:33):
I personally decline every single one that I see coming in. I'm like, no, I'm not going to do that. If I'm the meeting host, I'll get AI notes out of my platform, because I trust it at least to some degree, but I do not trust random meeting tools, meeting note takers that pop into my meetings. Now, it's socially a little bit awkward to say, no, I just kicked out your meeting note taker, but just get used to it.

(26:54):
People have this different perception of that particular tool because, in the moment, you have to say no to something, and it's a little bit uncomfortable, perhaps. But how do you know where Read.ai or meeting-note-taker-dot-whatever came from? I've made the joke before: maybe it's, like, North Korea's free app to try to surveil as much meeting data as possible, and I wouldn't be surprised if there's

(27:15):
some state actors behind some of these free tools, because why wouldn't you do that, right? If you're trying to get as much intel as you can on a population, give them a free tool that gets adopted by millions of people, and you have access to not just the text but their actual voices. You can clone their voices. There's all sorts of bad things you can do. So being thoughtful about the provenance of the tech you use

(27:36):
is the first thing. And also be thoughtful about what is free and what is not. If a product is free, you are not the customer, you are the product, right? And you just have to remember that. That's true in social media, and it's also true with these tools. So I think it's important to have just a basic rubric for evaluating these types of tools.

(27:57):
Say, like: who built it? Where is the company based? Where's the data housed? Am I paying for it? Am I not paying for it? Is there a clear set of terms of use? I'm not saying you should never go to a startup, but you should perhaps consider companies that have a little bit of a track record before you throw your most sensitive data in. So there's some things like that that I think are important. And then, coming back to this question of deceptive

(28:19):
misalignment, you're probably less likely to be subject to that kind of problem if you're using tools that came from a source that you trust. So I think it comes back to cybersecurity, but it also comes to this potential issue. I'm sure there are people out there thinking of how to build models and throw them out there that do all sorts of bad things, right? So sometimes they're not necessarily state actors.

(28:42):
They might be some teenager in a basement who's bored on a weekend and decided to fine-tune Llama 3.3 to do screwy things like this and put it out in the world. So you just have to be careful. I don't think that means you stop using AI. I don't think it's one of these things where you, you know, run for the hills. That would be an even bigger problem, because then you will

(29:03):
be, you know, the Luddite's Luddite, and you will be 100% unemployed and your association will not be around. So you won't have to worry about these issues, on the one hand, but you'll have other problems to worry about. So I would suggest that you learn this stuff and figure it out. But this deceptive misalignment concept, it's definitely worth thinking about: what could someone do with AI if they had, you know, a real bad attitude about the world?

(29:25):
They could do a lot of damage.

Mallory (29:27):
Moving to topic two, which is Reuters' legal victory in a copyright infringement case. Thomson Reuters has achieved a significant victory in a copyright infringement case against Ross Intelligence, marking a pivotal moment in the ongoing debate over the use of copyrighted material in AI training.

(29:47):
The ruling was delivered on February 11th of this year, and it confirmed that Ross Intelligence unlawfully used content from Thomson Reuters' Westlaw legal research platform to develop its own AI tools without permission. To give you some background, the legal battle began in 2020, when Thomson Reuters filed a lawsuit against Ross Intelligence, alleging that the now-defunct startup had copied

(30:09):
and utilized its proprietary content to create a competing AI-based legal research tool. The core of the dispute centered around the concept of fair use, which is a legal doctrine that allows for limited use of copyrighted material under certain conditions, such as for educational or transformative purposes. In his ruling, US District Court Judge Bibas decisively

(30:32):
rejected all of Ross's defenses regarding fair use. The judge emphasized that Ross's use of Thomson Reuters' headnotes, which are summaries of legal cases, was not transformative and was intended to create a direct competitor to Westlaw. The distinction is crucial, as it underscores that the court viewed Ross's actions as commercial exploitation rather

(30:53):
than legitimate fair use. The case is part of a larger wave of litigation, as we know, involving AI companies and copyright holders, as many firms face lawsuits for allegedly using copyrighted works without permission to train their models. Although this ruling is seen as a win for Thomson Reuters, legal experts caution that it may not have broad implications

(31:14):
for other ongoing cases involving generative AI technologies. The key difference lies in the nature of the technology involved. Ross was not developing generative AI, but rather using existing legal texts to provide responses to user queries. Legal analysts are suggesting that, while this ruling may influence some aspects of future litigation, especially

(31:35):
regarding non-transformative uses of copyrighted material, each case will ultimately be judged on its specific facts and circumstances. The outcome may provide some reassurance to plaintiffs like authors and publishers, who are currently suing major firms like OpenAI for similar copyright issues. So, Amith, we haven't talked about this in a long time. I want to say one of the very first episodes we did on the

(31:57):
podcast was around some of these lawsuits with the New York Times and OpenAI. So what's your takeaway, hearing this? I know we kind of cautioned that you can't apply this broadly to all of these lawsuits, but what's your thought here?

Amith (32:12):
So this particular case, as you pointed out, is different, because it's not a generative AI scenario. They are using AI techniques, but they are essentially providing what is heavily overlapping with what Westlaw provides directly to consumers. That which is competitive usually isn't fair use.

(32:29):
I'm not a copyright attorney.
But when it comes down to something that's kind of abundantly obvious, that if you use product A you no longer need product B, right (the classical definition of substitute goods), generally speaking, it's hard for the argument around fair use to hold water. Because the intention of fair use is to allow for derivative works to be able to cite, or to

(32:52):
be influenced by, other works, but not to be able to use them in such a substantial form that it undermines the need for the original work. And so clearly here that's the case. Now, you could argue that generative AI most certainly is a substitute good for classical search-based tools, like what Westlaw has done, and Lexis has done as well in the legal space for a lot of years, and tons of other information services like that exist, including those that are provided by associations.

(33:14):
So you could argue that, kind of generally, the substitute-good piece is not upheld, because these products do supplant and eliminate, essentially, the need for the original product. Now, the way these models work is they do pre-training, where they ingest literally the whole work, all the works from all the internet,

(33:42):
right. And so the way the model works, essentially, is it's kind of like a fill-in-the-blank exercise, if you remember those tests from the wayback machine, like even before Sidecar, right. So, you know, it's like, yeah, a long time ago. It would be like, you know, you have a sentence and there's a missing word or a missing couple of words, and you'd have to fill it in. And that is oftentimes the type of test people take,

(34:03):
particularly in grade school, but even later on in middle and high school. I think that's a pretty common thing, and that's kind of what's happening when you do pre-training. You're saying, hey, here's a sentence, here's another sentence, and then the model has to essentially attempt to complete the sentence or the paragraph, and bigger chunks as well. And when it does it correctly, it gets rewarded, and

(34:24):
that strengthens those weights. And when it does it incorrectly, it kind of has to go back, and it has this whole process that it's going through, and that process doesn't store the original content. It's just learning from the content and using it to basically build these weights. Now, by virtue of doing that over and over and over again, trillions of times, essentially you end up with a model that has

(34:47):
inherent knowledge of a lot of underlying works, because it's able to predict the content at such a high level of accuracy that it seems like it has a stored copy of a given novel or a given article from the New York Times, but indeed it does not. As a practical matter, it seems like it does. It feels like it's compressed all that knowledge into

(35:09):
something tiny, which in a sense it has, because it has the essence of that knowledge in it, but it's distinct from the idea of truly copying it, which is what happened in the case that you referred to. So at a technical level, it's a fundamentally different approach. That being said, the question will really be, like, with the OpenAI New York Times suit,

(35:29):
if a similar finding is upheld in that case, where fair use is not considered a valid argument, that'll probably have much greater implications. But this might be perhaps a preview of that. I don't know.
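
For readers who want to see that fill-in-the-blank description in code, here is a toy sketch (ours, not from the episode) of a next-token prediction training loop in PyTorch. The ten-word corpus and the tiny model are illustrative assumptions; real pre-training does the same thing over trillions of tokens:

```python
# Toy illustration of the "fill in the blank" pre-training loop described above.
# A real LLM does this over trillions of tokens; the point is that only the
# model's weights are updated -- no copy of the training text is stored.
import torch
import torch.nn as nn

text = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(text))
stoi = {w: i for i, w in enumerate(vocab)}
ids = torch.tensor([stoi[w] for w in text])

# Tiny next-token predictor: embedding -> linear layer over the vocabulary.
emb = nn.Embedding(len(vocab), 16)
head = nn.Linear(16, len(vocab))
opt = torch.optim.Adam(list(emb.parameters()) + list(head.parameters()), lr=0.1)

for step in range(200):
    logits = head(emb(ids[:-1]))                      # predict word t+1 from word t
    loss = nn.functional.cross_entropy(logits, ids[1:])
    opt.zero_grad()
    loss.backward()                                    # a correct guess is "rewarded"
    opt.step()                                         # by nudging weights, nothing more

# After training, the model produces plausible continuations from its weights alone.
next_id = head(emb(torch.tensor([stoi["the"]]))).argmax(-1).item()
print("after 'the' ->", vocab[next_id])
```

Note that only the `emb` and `head` weights persist after training; the corpus itself is never stored, which is the distinction being drawn here.
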
My point of view is pretty straightforward on this: that associations should seek to increase the clarity with which

(35:52):
they communicate that materials are copyrighted, and protect themselves by clearly denoting that to be the case. Not that copyright necessarily requires that, but it's helpful. And I would also point out that the content that you have kind of behind the paywall, that should never have been scanned by any AI, because it's protected by a barrier.

(36:14):
If someone went through a paywall, like, used a paying account and then grabbed all your content and then sucked it in, that violates a whole bunch of other legal issues, right? In terms of, they had a contract with you, and then they went through and stole the content. That's a different issue than something publicly on the web. A lot of associations are in a different place, because they

(36:34):
have historically not published all of their content for free on the web. In fact, I'd be willing to bet that it's a fairly small minority of associations that publish all of their content freely on the web. Many of them have it as a member benefit. They have their journal archives and other things behind a paywall. So if you find that your content has been somehow

(36:55):
misappropriated and is available in an AI model, and it's been behind a paywall the whole time, I think that is something to take a very serious look at, because that's a little bit different than content that's in the public domain. So what we're referring to here is content that has been in the public domain. In the case of the Westlaw piece, I don't believe that their case summaries are in the public domain at all, so clearly this company, Ross, must have had access.

(37:17):
Perhaps it was through a partnership agreement, some kind of contract. So I think there's a lot of different layers to these things, but what I can guarantee you to be true is that there will be more and more of these cases coming up. I think it's really important to have a high-quality copyright attorney, not necessarily on your payroll but at your disposal, so that you get to know someone who's competent in

(37:39):
these matters, who can help kind of guide your thinking on it. Because I do think that being thoughtful about when your content potentially has been taken from you, and doing something about it, is really important. I mean, I've said this for a long time, that associations have a couple of key assets that are largely latent, one of which is their content, the other is their brand.

(38:00):
I don't think associations leverage each of those assets nearly to the extent that they could, to be able to activate it so that you are

(38:22):
the center of the universe in a way that no one else can be in your space, right? So what I mean by that center-of-the-universe comment is: you have resources and you have the brand authority to be the end-all, be-all solution in your particular narrow slice of the world, right. So in your vertical, you have this unbelievably great content resource no one else has. You should be able to monetize that, because you own it. Now, are there other content sources that are perhaps 90% as

(38:44):
good as yours? Maybe so. But if you do it right, then your content, being the best content, should provide a durable advantage. But your brand as well, in combination with that, is a really important thing to protect. So I think this is a really important topic for associations to have in kind of the center of their sights as they're looking ahead, because, you know, you could very easily get run

(39:06):
over if you don't pay attention to this.

Mallory (39:10):
You mentioned an example of finding out maybe your content was misappropriated, particularly paid content for an association behind a paywall. Knowing, based on what you just said, that the models don't have stored copies of this information, how might an association leader find out that their paid content was misappropriated?

Amith (39:31):
Well, if you go have a conversation with a model, and you're asking a question where you know the only way the answer could be derived is from a journal article that has always been behind a paywall, and there is something in there that you know is only in there, right? Which is a hard thing to prove definitively, that that content doesn't exist

(39:52):
somewhere else. But essentially, what I think is becoming likely to be the outcome for some of this is that these AI companies are increasingly looking at their training sources, something that they should disclose, saying, hey, this is where we got all the content for our training. So I think you can attempt to test the models, but be careful with the way you

(40:14):
interpret the results, because these models are so good, and they've had a lot of very broad training data, that even if they didn't have access to your content directly, they may be quite good at your field. So you have to be very, very specific, talking about a particular issue in a particular article. And, by the way, even if the model was trained on your content,

(40:35):
that doesn't mean it's going to have perfect recall of that article. So you might say, oh, we're good, because the model didn't know about that particular article. That actually doesn't prove it either, that it didn't have access to content that it shouldn't have. There's another issue that's related to this, that I think also makes this more interesting and more challenging, which is that, increasingly, models are being trained on synthetic

(40:58):
content. So you take a model like Llama 3.1 405B, which is the 405 billion parameter edition of the Llama 3.1 model, which was released just under a year ago (I think it was April of last year), and that model was used to generate a lot of the content which, in turn, trained the Llama 3.3 70 billion parameter model,

(41:19):
which is obviously much smaller and has been used to train countless other models, in some cases from-scratch models that were not based on the pre-training of the Llama series of models, but just using Llama 405B to do content generation, synthetic content generation, which, in turn, was allowing other models to be trained from scratch.

(41:40):
So, as these derivative models are based on more and more synthetic content, which is a large part of the game now, it's a really good question: how do you trace the roots all the way back? It becomes harder and harder and harder to do that. So, on the one hand, I think it's a really important topic to be up to speed on and pay attention to. On the other hand, I don't know how good the defense will be in

(42:02):
this category, because the models are already out there. It's kind of like the first topic we talked about: the train's left the station. The models are out there in the world, and there's models out there that are sufficiently smart and capable of producing good content, that are free, and there's lots of them. It's not just Llama, that's just the one that's top of mind. So I think that you have to look at it from the viewpoint of

(42:24):
just being aware of this as much as anything else. The last point I'll make on it, just real quick, is that associations have often been content thinking, and in some cases knowing, that their content is the best in a given field. But if there's other content out there that's 95% as good, but it's free or it's just easier to access than yours, what will

(42:46):
most people choose? You know, they'll likely go down the low-friction or no-friction path. So whether or not that's right, whether or not it was based on your content or it's just a competitive threat from another attack angle, ultimately may not matter as much as: what do you do about it?
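
As a rough sketch of the kind of probing described here, one could prompt a model with the opening of a paywalled article and measure how closely its continuation matches the real text. This is a hypothetical illustration, not a forensic tool; the model name and the excerpt placeholders are assumptions, and, per the caveats above, a high score only suggests exposure, it proves nothing:

```python
# Hypothetical recall probe, with all the caveats above: models can know a
# field well without ever having seen your specific article, and imperfect
# recall doesn't clear them either. A weak signal at best.
from difflib import SequenceMatcher
from openai import OpenAI  # assumes the OpenAI Python SDK and an API key

client = OpenAI()

prompt_prefix = "Complete this passage from <Journal Name, Vol. X>: "
excerpt_start = "..."   # opening sentences of a members-only article
excerpt_rest = "..."    # the continuation you are testing for

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt_prefix + excerpt_start}],
)
continuation = resp.choices[0].message.content

# High similarity to the true continuation *suggests* (not proves) exposure.
score = SequenceMatcher(None, continuation, excerpt_rest).ratio()
print(f"similarity to the real continuation: {score:.2f}")
```
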

Mallory (43:03):
I was on an association website (I don't remember which association, in case someone is listening to this and says, hey, that's us), and I saw a banner at the top that said something along the lines of: no AI model may scrape our website, or, none of the information here can be used to train any AI models. I know neither of us are lawyers. Do you feel like there's kind of any weight in doing something

(43:25):
like that on your website? Or probably not?

Amith (43:29):
I don't have a legal opinion on that, because, I think, you know, there's kind of a precedent for that with robots.txt, which is a file you can drop on the root of your website to tell Google and other search engines not to index your site. So if you kind of assume that that somehow can lean into this, and say, okay, well, if you have an ai.txt file that says, don't use my content for training, okay, at least you're

(43:50):
putting notice out there that you don't allow this.
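
For reference, the robots.txt mechanism mentioned here already has AI-specific user agents. A minimal example follows; these crawler names are real as of early 2025, but, as the conversation notes, honoring them is entirely voluntary:

```
# robots.txt, placed at the root of your site.
# These directives ask specific AI crawlers not to fetch your content.
# Compliance is voluntary, so this is notice, not enforcement.

# OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Opts content out of Google's Gemini training (does not affect Search)
User-agent: Google-Extended
Disallow: /

# Common Crawl's crawler, whose corpus is widely used for AI training
User-agent: CCBot
Disallow: /
```
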
But I don't think the argument from the AI companies has been that people didn't say it was a problem, right? That's not so much what the conversation's been. I think the bottom line on a lot of this stuff, when it comes to what people are going to do, similar to the AI safety

(44:11):
conversation and this containment conversation from the earlier topic: it's going to be tough. It's going to be tough to contain it, and the knowledge of these AIs is already so great, so high, that it's going to be very, very difficult to have something that has a true, durable, competitive differentiator. Unless, again,

(44:31):
your brand is that strong, and you continually work to reinforce the quality of your brand and how you actually position your brand in the marketplace, which is not obvious and it's not automatic. A lot of associations neglect their brands. They don't even think about their brand as a brand, they just kind of go on in their business and they don't really work to emphasize it. And then, obviously, retaining a community of people who are

(44:53):
producing the great content to begin with. You say, how did this association get the best content in topic A, B or C? It's because they've been able to attract the best minds in that field, who almost always volunteer. They don't get paid to contribute that content, so there's something magical about that. If you can keep doing that, then as the fields continue to progress, and if you can put the right kinds of protections,

(45:15):
whatever those may be, around those assets, you probably have a protectable future. It's hard to speculate beyond that.

Mallory (45:25):
Yeah, and I want to wrap up here with that other example you gave, which is, let's say, hypothetically, the association has 100%, the best content in an area, but maybe there's a competitor that's at about 95% in terms of quality, but it's much easier to access. How would you, Amith, if you were an association leader, how would you think about that 5% difference and how to

(45:48):
activate it, like the 100% versus the 95%? You kind of touched on that with pulling out the brightest minds and continuing to make the best, latest, most cutting-edge content at your association. But how would you approach that challenge?

Amith (46:02):
Well, I think that associations need to stop complaining about the fact that they're associations, and that they're not Amazon and not Netflix and they're not Microsoft and so on. Pick a company that's considered the contemporary leader in user experience, ease of use, simple, clean, elegant product design, and you say, hey, association, how come your website isn't as cool as this one, or as easy to use as that

(46:24):
one, or as personalized as this other one? Right? The common complaint is, hey, look, we're a small association, we only have a budget of X, we don't have hundreds of millions or billions of dollars to do R&D on this stuff. And that fact remains the same. However, it is not relevant, because your consumer doesn't care. The consumer is going to do what's in their best interest,

(46:46):
increasingly so. And what I mean by that is, generationally, there's a lot of data that suggests that this desire to join and be part of something, simply because that's what you do in your profession, is largely a boomer and Gen X thing, and is maybe a little bit in the millennials, but it's really starting to decline. I don't know if that's an age thing (as people get older, they want to belong to a group), but it just seems to be that the

(47:07):
fundamentals of why people belong have changed, and there's a lot of data suggesting that strongly. So my question would be: how do you create the best consumer experience if you have limitations? And I would simply point you to this: if you're creative about it, and if you're thorough and you're focused, you can create a lot of cool things, and you can stand on the shoulders of

(47:27):
giants. A great example is DeepSeek. For $6 million, and in a matter of weeks, they created something that is competitive with products that cost billions of dollars and took years to build. How did they do that? They stood on the shoulders of OpenAI and everybody else's work. They didn't do it completely independently, in a vacuum. And you too, as the association leader, can indeed do that, because technology is more available, it's more affordable,

(47:49):
it's more accessible. You just have to go take action, you have to get started. So, complaining about it (which, unfortunately, I hear far too often in the boardroom, where the association and its board are saying, oh, why don't these people just understand that we're just a little association, or even just a big association, we can't do it?), that's a losing argument. That's a conversation you should outlaw. Just eliminate that conversation.

(48:10):
Talk about, well, how can we take baby steps, one day at a time, to improve this thing? And there's a lot of things you can do. For example, this idea of using generative AI to answer questions. Right, maybe you've never had a good search experience, maybe your AMS is older than I am, maybe you have all these other things that feel like your hands are tied behind your back, or both

(48:31):
your hands and your feet are tied up. But that doesn't mean you can't take advantage of some brand new tools and do part of the solution immediately, right? Put a generative AI front end on your website that can answer questions at scale. Make it possible for people to contact you and have great AI-powered customer service that can answer questions accurately and immediately. Do other things like that, right?

(48:51):
Solve the problem one piece at a time.
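
As one illustration of that "generative AI front end" idea, the sketch below grounds answers in an association's own content. The model name and the single-file knowledge base are stand-in assumptions; a production version would use a proper search or vector index over your articles:

```python
# A minimal sketch of a question-answering front end grounded in your content.
# "association_faq.txt" is a hypothetical placeholder for your own material.
from openai import OpenAI  # assumes the OpenAI Python SDK and an API key

client = OpenAI()

# In practice this would be a search/vector index over your content;
# here it's just a file of snippets pasted into the prompt.
knowledge = open("association_faq.txt").read()

def answer(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the provided association content. "
                        "If the answer isn't there, say so.\n\n" + knowledge},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("When does early-bird registration close?"))
```
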
A lot of times, replacing legacy systems is the last thing you want to do. I talk to people all the time. They're like, hey, Amith, I listened to the pod, or I went through the course, it's really cool and it's so awesome. But yeah, we haven't done anything yet. And I'm like, what's up? Like, how come? And the short answer is almost always, well, we've got

(49:13):
this really old AMS, or we've got this LMS that's killing us. And I'm like, yeah, so you're going to spend two years and probably seven figures to replace system X, whatever it is, to get system X plus one, and be maybe 10, 20, 30% better than you are now. It's not going to make that much of a difference. In the meantime, the world's moving on.

(49:33):
So I would say you should probably pause all those big infrastructure projects for now. If you're literally bleeding to death on the side of the road because your AMS is that horrible, maybe try to patch it up a little bit, but focus on AI. That's the stuff that's going to change the game for you. And it's really simple, actually: it's just the power of the word no.

(49:53):
Associations aren't good at using it, and you need to stand up and say, no, we're not going to do this other project, we're not going to continue to upgrade AMSs every seven to ten years just because we're, quote unquote, due for an upgrade, due for a change. It's not going to help. You have to rethink the way you prioritize the resources that you do have. And you do have resources, even the smallest

(50:14):
association. You have resources, you have your time, you have your energy. You probably have at least a few dollars to throw at the thing, and AI keeps becoming cheaper and cheaper, so that's good too.

Mallory (50:24):
Yep, this is your time to be creative, to stand on the shoulders of AI giants out there and, I've got to say it, to stop thinking about your crusty old AMS. If you know, you know; that was from a previous episode. Everybody, thank you for tuning in. This was a great French-inspired episode. We are looking forward to seeing you next week.

Amith (50:48):
Thanks for tuning in to Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the

(51:11):
association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.