
July 3, 2025 30 mins

In this conversation, Michael Quinn interviews Nusrat Farook, founder and CEO of effectivRAI (www.effectivRAI.com), about the importance of responsible AI in corporate settings. They discuss the trust and responsibility gaps in AI implementation, the varying mindsets of leadership regarding AI, and the critical need for robust governance and training within organizations. Nus emphasizes the role of responsible AI as a defense against cybersecurity threats and the necessity of external audits to build trust. The conversation also touches on the digital divide in AI adoption between the global north and south, highlighting the need for localized AI solutions.

Keywords

Responsible AI, Cybersecurity, AI Implementation, Leadership, Trust Gap, Digital Divide, AI Governance, AI Ethics, Corporate Strategy, AI Audits

Takeaways

effectivRAI addresses the trust and responsibility gap in AI.

Corporate leadership is at different stages of AI readiness.

Mindset change is crucial for AI implementation.

Responsible AI can serve as a cybersecurity asset.

Understanding different types of AI is essential for governance.

Internal and external risks must be managed in AI.

External audits build trust in AI systems.

Brand trust is linked to responsible AI practices.

Communication between boards and management is vital.

The digital divide affects AI adoption globally.


Titles

Bridging the Trust Gap in AI

The Role of Leadership in AI Implementation


Sound bites

"The speed is the key here."

"Communication is the key there."

"We need to get digitized."


Chapters

00:00 Introduction to effectivRAI

00:56 Identifying the Trust and Responsibility Gap

03:13 Understanding Leadership Awareness and Mindset Change

07:23 The Intersection of Responsible AI and Cybersecurity


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:04):
Welcome to the Brand AI Report podcast, Boards, Bots and the Bottom Line.
I'm your host, Michael Quinn, editor of the Brand AI Report and a Google-certified gen AI leader.
If you're an executive trying to figure out how to implement AI without putting your brand at risk, you are not alone.
Each episode of this series is meant to be a building block towards solving that challenge.

(00:29):
Real conversations with leaders who've been where you are and found a way forward.
My guest today is Nusrat Farook, founder and CEO of effectivRAI, a new US-based advisory firm for large organizations to understand and adopt responsible AI frameworks
and processes.
Nus and her remarkable team have combined experience at the UN, the World Bank, the World Trade Organization, Verizon, GE, IBM, and large philanthropies in senior roles in

(01:00):
management consulting, cybersecurity, and of course,
responsible AI. Nus is an engineer.
She also holds a master's in public policy from Columbia and will soon get her PhD in international security from Yale.
Amazing.
Welcome.
This is our inaugural podcast for the Brand AI Report.
So thank you so much for being a willing participant.

(01:22):
Why don't we start, Nus, with telling us about effectivRAI and what you saw, what need you saw, that made effectivRAI seem like the solution.
Well, my business partner, Dr. Derek Leibert, and I met, I think in 2024, at a conference or something, an annual Columbia conference in Washington, DC.

(01:46):
And we became really good friends.
We started talking about AI.
We started talking about Columbia AI.
We realized that, I mean, Derek, a little bit about him:
He's a consulting genius.
He is the founder of multiple consulting firms.
He has seven books on leadership to his name.

(02:08):
He's amazing.
He and I had such an amazing mind meld that we realized that the way he thinks, I think too, in many, many ways.
And so we combined his consulting genius and my engineering, tech, and counterterrorism background.
And we realized that there was a trust gap and a responsibility gap when it came to corporate

(02:29):
deployment and implementation of artificial intelligence.
And so the gap was that the corporate leadership seemed to be really aggressive in pursuit of implementing AI and deploying AI.
But on the other hand, the clients, the customers of these big brands, were hesitant and were not responding to that aggressiveness from the executives.

(02:51):
And so there's this disconnect between the leadership and the client and the customers.
And so you see that this disconnect is really dangerous, but we realized that we could convert that gap into a powerful asset.
So we combine trust and responsibility and make that into something that's an asset to the corporate leadership.

(03:13):
So that's why effectivRAI, which is effective use of responsible and trustworthy AI.
So we decided, okay, let's give it the shape of a firm and see how we go about it.
See what the boards and C-suite have to say about it.
That's so interesting.
Do you find that they are aware, that the CEOs, the C-suites, the boards of clients, realize that they have that gap, that there's a discrepancy or at least space between let's get to AI

(03:45):
implementation and wait, wait, there's some groundwork to cover first?
To some extent they do.
So in some institutions, the top leadership, the top management, is still processing. It's still processing AI, responsible AI: whether to implement, whether to not
implement, what does it mean?
How does it shape our goals?
Is there a return on investment?

(04:05):
And in some cases, some leadership are really ready, and they want to, but then they don't know how.
And so the leadership is
at different stages, we see, and so that's why we have different solutions for different stages of where the leadership in a corporation is.
I'm curious what that conversation is like when you're broaching the solution to that gap that they perceive, that they know that they need ROI, that there's pressure in

(04:33):
the business, competitive pressure, and maybe even within the organization about different projects vying for attention or funds for AI pilots.
Is there a magnitude or an order of threats or benefits that you present to them
that they can consider and, if not cherry-pick, at least know where and how to begin this process with you.

(04:55):
First, so, to begin with us, or to even begin: I think what the leadership needs to do is have a mindset change.
What does that mean, a mindset change?
You have to start that conversation within the board or within the C-suite to act as a collective, to act as a steering committee, in order to bring whatever you want to bring

(05:15):
with respect to AI or responsible AI.
We have seen a lot of AI implementation, but we're not seeing a lot of responsible AI implementation.
However, now the governments and the regulators are coming out and saying, we want to see some responsibility.
We want some compliance.
And so we're going to be aggressive about applying these regulations, whether you like it or not.
But you see, in different regions: Europe is very aggressive about regulations.

(05:40):
But the United States is giving a little bit of a free hand in terms of that, because the Trump administration wants to implement AI.
They want the...
They have issued a memo, the M-25-21 memo, which basically guides the federal agencies to implement AI, responsible AI.
So there are a lot of steps happening on that front.

(06:01):
In terms of threats and benefits: the corporate leadership, the organizations, the brands, they do realize that there is an immense threat, and therefore the implementation
of responsible AI should be immediate.
But then what that looks like, they don't know.
What are the costs and benefits of that?
They are still juggling with that.
Some brands, however, have been agile about implementing and changing to responsible AI, and they already have those processes and systems in place.

(06:29):
But that is the key.
The speed is the key here.
Otherwise the brands would become irrelevant and obsolete, because technology is moving so furiously fast that if you do not adapt right now,
you're going to look obsolete in a few years, or even a few months from now.
We don't know.

(06:49):
So we start with the mindset change.
The top leadership has to be set up as a collective.
And then whichever direction they want to take it, say, for example, it could be legal or regulatory compliance that they want to check their responsible AI capabilities in.
It could be their data integrity.
It could be their ethics-driven development.

(07:10):
It could be cybersecurity, it could be maybe crisis response protocols.
It could be something else that the sector is worried about.
So they do worry, and I think they need to do the cost-benefit analysis.
Two things that you said really are super interesting.
It's all really interesting, but the idea that responsible AI is a solution and a defense against cyber, or an asset for cybersecurity, a defense against cyber threats.

(07:41):
I was wondering if you could talk a little bit about that, since so much of responsible AI is ethics and transparency, observability and fairness, and yet cybersecurity
seems much more dark. And maybe that's the equation, it's a light and dark thing, but I'm sure it's more complicated than that.
I'm not a cybersecurity expert, but I can talk about the digital counterterrorism side of things, because I've worked in digital counterterrorism, and social media and terrorism,

(08:08):
terroristic content and violent extremist content, is sort of my forte.
But you can parallel what I say in terms of cybersecurity as well.
So I'm going to parallel those two together when I speak to your question.
I think it's really important to understand the difference between narrow AI, generative AI, and agentic AI.
So, narrow AI is just the traditional AI, the predictive AI that we all know.

(08:33):
The examples of that are facial recognition, likelihood of being a good candidate for a job, et cetera, et cetera.
Then generative AI comes in, and the difference between narrow AI and generative AI is basically that with narrow AI, the outputs are how the data scientists build that AI algorithm,
predictive analysis.
But with generative AI, the outputs are largely a function of the prompts that,

(08:55):
as an end user, you give to the generative AI.
So you change the prompt, the output is changed.
So therefore we need to embed responsibility into that prompting.
What do I mean?
For example, you use ChatGPT or you use any other LLM, right?
Anybody, take anybody.
So if they put in sensitive company information, the LLM is gonna basically process that information and send that to that third party that the LLM

(09:23):
belongs to, and the result is that the LLM learns on the data that it gets from the users.
So you need to be responsible.
You need to be trained in terms of how you're interacting: even if you're interacting on a personal level with an LLM, but you're using it for your work, you're
inputting sensitive data into the LLM.
So you are actually exposing that data, and nobody knows.

(09:44):
So a company needs to train,
its employees need to be trained; it comes, it trickles down from the top.
So first you need to have that mindset change, and then you need to trickle it down, the mindset change, into your processes and systems, but also into your
employees, because how they're using your sensitive data, you have no idea.
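As a concrete illustration of the kind of protocol she is describing, here is a minimal sketch, not from the conversation and purely hypothetical, of a guard that screens prompts for sensitive patterns before they ever reach a third-party LLM. The pattern names and regexes are illustrative assumptions; a real deployment would use a proper data-loss-prevention classifier, not regexes alone.

import re

# Hypothetical patterns a company might block before a prompt leaves
# its perimeter; this is a sketch, not a real DLP system.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str):
    """Return (allowed, matched_pattern_names) for a candidate prompt."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    return len(hits) == 0, hits

allowed, reasons = screen_prompt("Summarize the memo for jane.doe@example.com")
if not allowed:
    print("Blocked before reaching the LLM:", reasons)  # -> ['email']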
So, and then, okay: narrow AI, generative AI, and agentic AI.

(10:07):
Agentic AI goes one step further.
Now, AI agents are agentic AI.
There's no human end user communicating with the LLM.
It is basically a bot
that is communicating with the LLM on the human's behalf.
The human has given certain instructions to the bot, like, okay, I want this, this, this.
I want this research.

(10:27):
I want profile information of this person, that person, or this president, that prime minister.
And so the bot is basically interacting with the LLM.
And you can imagine the agentic AI ecosystem as something like a factory.
Imagine a factory.
A factory has a lot of workers in it.
and the workers are working at different workstations, and those workers do interact with each other.

(10:50):
Now imagine AI agents: they are working, you have deployed AI agents, and you can cross-connect them, you can make them interact with each other, and they're
interacting with each other and they're working on different workstations, and all of that factory is in your laptop, basically.
So you can imagine how bad actors, be it in cybersecurity or terrorism, how those bad actors can use

(11:13):
generative AI and agentic AI, let alone narrow AI, how they can misuse all of this to attack sensitive targets,
vulnerable targets.
Google has come up with an agentic AI ecosystem, but it's not open to the public yet.
It's only open to certain companies, certain organizations, individuals. And Relevance AI:

(11:33):
what Relevance AI has done is they have basically gamified making AI agents.
I made my own AI agents and I basically created an executive assistant for myself.
I gave it a lot of tasks.
Okay.
You need to do this, this, this, and then you have to, like, assign reasoning
to it: this is how you're going to go about doing research for me.

(11:53):
And this is how you're going to go about drafting emails for me, but do check with me first before sending an email to someone.
But then you have to give this agent access to your own Gmail, your own calendar, your own notes or thoughts.
All of that information, all of that sensitive information, is going to a third party.
So I personally am scared to give a bot, a third-party bot,

(12:16):
access to my emails.
So I can imagine how much fear there is among customers and clients about getting used to those systems which run on AI, gen
AI, or agentic AI, let alone the agentic AI ecosystem, which is something new.
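To make the executive assistant she describes concrete, here is a minimal sketch of that worker pattern: an agent holding fixed instructions, composing its own prompts to an LLM on the human's behalf, and gating a risky action (sending email) behind human approval. The call_llm function is a hypothetical stand-in for any LLM API; none of this is Relevance AI's or Google's actual interface.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever LLM API the agent is wired to.
    return f"[model response to: {prompt!r}]"

class ExecutiveAssistantAgent:
    """One 'worker' in the factory analogy: fixed instructions, an LLM
    it prompts on the human's behalf, and a human-approval gate."""

    def __init__(self, instructions: str):
        self.instructions = instructions

    def draft_email(self, topic: str) -> str:
        # The agent, not the human, composes the prompt to the LLM.
        return call_llm(f"{self.instructions}\nDraft an email about: {topic}")

    def send_email(self, draft: str, human_approves) -> str:
        # Per the guest's own rule: check with the human before sending.
        return "sent" if human_approves(draft) else "held for review"

agent = ExecutiveAssistantAgent("You draft concise, polite business emails.")
draft = agent.draft_email("rescheduling Thursday's board review")
print(agent.send_email(draft, human_approves=lambda d: False))  # held for review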
Thank you for that.
Is the role of responsible AI then, for, say, the corporate owners and leadership, the board, vis-a-vis the agents, agentic AI, to simply have a more robust governance

(12:47):
process, so that if each individual laptop is a factory, there's a process for governance and observability to make sure that it's not necessarily bad
actors who are operating the agents within the company.
It's simply that the
work is hectic and the pace is fast, and sometimes, you know, what's to be observed and safeguarded may not be, for the sake of expediency.

(13:14):
Is that part of the responsible AI benefit to corporate clients, to not only create but condone and encourage that kind of governance on a day-to-day basis?
Absolutely.
I mean, on one hand, yes, you need to have robust governance within your own corporation.
But then what do you do about certain cases where an employee has two laptops?

(13:35):
And it's not the employee's fault.
They're using the personal laptop to use ChatGPT.
But then they can very well just see what the output from ChatGPT is, and then copy it or write it down on their own, on the work laptop.
So there are a lot of scenarios when it comes to that, you know, if we're talking about cybersecurity or counterterrorism. But generally, if we look at a general corporation, I

(13:58):
mean, you need to train your employees, you need to train your engineers, you need to train your executives, your product managers, your legal teams. I would say, encourage
using generative AI, but then encourage it within your system.
Look, employees are already using it, okay, but you need to have certain protocols in place.
Because we have seen so many cases: there's just one tiny message on one laptop and then the whole system is basically gone.

(14:26):
We have seen those kinds of cybersecurity incidents in the very recent past.
So I think training and skill building the employees would be great in that scenario.
Got it.
Cybersecurity is, of course, a major concern, but I don't, at least I don't see an awful lot of that conversation going on outside of cybersecurity experts. So much of the

(14:47):
goal with AI, agentic AI, is efficiency, and coworkers to improve productivity and automation.
If cybersecurity is one threat that the board and management need to be aware of, and more aware of, and also more conversant in, are there other
threats or risks
that need to be mitigated when approaching an AI-powered organization?

(15:09):
Enormous.
So there are two kinds of risks, I would say. There are internal risks.
And then there are external risks.
The internal risks come from the algorithms, the data, the biased data within the organization, the messy data within the organization.
And then, if you want to design your own LLM, or if you want to use a third-party LLM, the data that you're using, you're giving to the LLM.

(15:33):
You have to make sure that that data is unbiased.
Those are sort of, you know, the biased data, the internal threats.
And the external threats are such as, in cybersecurity, there's a threat actor who wants to access your systems and who wants to shut down your systems or get sensitive information.
Those are the kinds of external threats.
So those are the two.
So I would say, for both the internal and the external threats, something that we do, the solution that we at effectivRAI offer, is an end-to-end third-party RAI audit.

(16:03):
I was talking about it earlier as well.
We have a six-stage process to do
these end-to-end audits, where we help first define the scope and purpose of an audit for a firm, for a corporation.
We determine the client's needs.
We determine the client's expectations in terms of internal threats versus external threats.

(16:24):
And then we have certain expertise areas that we do audits in, such as, I was talking about it earlier as well: data integrity, political risk, business
effectiveness,
transparency, accountability, cybersecurity, terrorism, legal or regulatory compliance.
And then what we do is we attach a domain expert to each of those areas.

(16:48):
And then we drill down, evaluate the evidence that the corporation has, and then we review the documentation.
And we assess the risks and opportunities, whether it's an internal risk or an external risk.
And then that expert consolidates the whole work and provides recommendations.
And then we share the results with the top management. But it is important to have an external audit, because that builds trust and validity for the kind of work that the

(17:18):
organization is doing.
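For readers who want the six stages in one place, here they are rendered as a simple checklist structure; the stage names and granularity are paraphrased from the conversation, not effectivRAI's official terminology.

from dataclasses import dataclass

@dataclass
class AuditStage:
    name: str
    done: bool = False

# The six stages as paraphrased from the conversation; illustrative only.
AUDIT_STAGES = [
    AuditStage("Define the scope and purpose of the audit"),
    AuditStage("Determine the client's needs"),
    AuditStage("Determine expectations: internal versus external threats"),
    AuditStage("Select expertise areas and attach a domain expert to each"),
    AuditStage("Evaluate evidence, review documentation, assess risks and opportunities"),
    AuditStage("Consolidate findings and deliver recommendations to top management"),
]

for i, stage in enumerate(AUDIT_STAGES, start=1):
    print(f"{i}. {stage.name}")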
Otherwise, I mean, we have seen cases where there is a crisis, a trust crisis, between the brand and the customer.
I'm glad you brought that up.
I was thinking about brand in no small part because that's in our name, the Brand AI Report, and customers, and how generative AI or agentic AI are either behind the scenes or

(17:42):
directly engaging with customers.
Is there a threat to the brand in how a company is using generative AI and agentic AI?
Of course there are threats and benefits and opportunities.
Could you be more specific?
Sure, sure, sure.
You know, I think that in my own understanding of responsible AI and brand, they share common values of transparency and do no harm, some pretty basic foundational values to

(18:08):
really any brand, that are much more forward when it comes to responsible AI. In the sense of a visual, of a Venn diagram,
I see those two things as overlapping: brand and the purpose of responsible AI.
And I'm wondering if, in the absence of responsible AI, there is, in the AI-powered marketplace, a risk to the brand that organizations may not be considering, or at

(18:35):
least have not, you know, detailed out, haven't defined yet, other than in a general sense, an obvious sense: that the generative AI that is engaging with our customers is in fact
authenticating and representing what we stand for
as an organization. It's representing, whether there's a human in the loop or not, whether it's assisting customer service or actually assisting customers on an e-commerce site

(18:57):
without a human in the loop.
So the processes for ensuring, safeguarding, governing those AI coworkers or AI tools would very much be a feature of responsible AI, I would think.
And I also think that that would be a process.
So that's what I was driving at.
No, you're absolutely right.
And I love Venn diagrams.

(19:18):
As an engineer, I love drawings and charts and graphs.
And it actually makes things clear.
Taking that Venn diagram, I think, you know, the brand and the purpose of responsible AI is not just this partial overlap.
It is like the full overlap for me. If you want to deploy, which you should deploy, because you have,

(19:40):
You have no choice.
So you need to deploy trustworthy and responsible AI.
I was reading this Apple credit card launch story.
This happened in August 2019, when the Apple credit card was launched.
And in a few months itself, there was a major crisis.

(20:02):
Users noticed that the card basically offered smaller lines of credit to women than men.
So you can see there's an algorithm at work without the human in the loop.
Okay.
And so the scandal spread on Twitter, and Wall Street regulators started an investigation into this algorithm.
Apple's response added confusion and suspicion.

(20:22):
Nobody in the company seemed to be able to describe how the algorithm worked.
And we're talking about narrow AI at this point in time, 2019. And Goldman Sachs, which is basically the issuing bank for the card,
they said there's no gender bias in the algorithm, because
the gender variable was not fed to the algorithm as an input.
So how could the bank discriminate if it didn't tell the algorithm which customers were women and which were men?

(20:48):
That's a logical question, right?
That is a logical question, a logical answer to a layman.
But to an expert, the bias is obvious from that response.
And why so?
I mean,
When I was reading that, you know, I've done extensive field surveys in India.
I've organized and managed large scale surveys in multiple states in India.
What we do is we use proxy questions to get an estimate of a person's age, gender, race, political alignment.

(21:15):
We cannot ask those questions directly sometimes, because the questionnaires are really sensitive.
So what we do, for example, is we use a person's address to estimate whether that person is most likely to be living in a black neighborhood or a white neighborhood.
So if it's a white neighborhood,
they're most likely to be of white race.
Similarly, if you look at the credit card history: what kind of products is the person buying?

(21:37):
Are there beauty products and what kind of beauty products?
So you can assess whether it's a female or a male.
So there are these proxy questions that you use, and these proxies create biases in the system.
So leaving a crucial input, which is gender, out of the algorithm, and assuming that will eliminate the bias, is a very common and dangerous misconception.
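Her point, that dropping the protected attribute does not remove the bias when a correlated proxy remains, can be shown in a few lines on synthetic data. Everything below is an illustrative assumption: made-up data, a made-up proxy feature, and scikit-learn's stock logistic regression, not the Apple Card algorithm.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: gender is hidden from the model, but a correlated
# proxy (say, a purchase-category score) is fed in as a feature.
rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)                # hidden protected attribute
proxy = gender + rng.normal(0, 0.3, n)        # strongly tracks gender
income = rng.normal(60, 15, n)                # legitimate feature
# Historical labels biased against gender == 1, mimicking past decisions.
high_limit = (income - 10 * gender + rng.normal(0, 5, n) > 55).astype(int)

X = np.column_stack([income, proxy])          # gender itself is NOT an input
model = LogisticRegression(max_iter=1000).fit(X, high_limit)
pred = model.predict(X)

# The model still treats the two groups differently, via the proxy alone.
print("high-limit rate, group 0:", round(pred[gender == 0].mean(), 2))
print("high-limit rate, group 1:", round(pred[gender == 1].mean(), 2))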

(22:00):
Such crises lead to loss of customer trust.
So what I explained is just a narrow AI model bias case. But now you can imagine, and I explained generative AI and agentic AI, the factory, you know, the prompts that
users give: different prompts from users give different outputs.
So you can imagine, if biased generative AI models or biased agentic AI are deployed into a corporation, how much damage that can do.

(22:28):
And it would be difficult to sort of pin down
where that damage is coming from.
And that's why it's important to do end-to-end responsible AI audits, by experts, at the very beginning of adopting and adapting to AI.
So interesting.
As you're telling that story, I'm thinking that that would be news, and welcome, to the board as well as a management team.

(22:51):
But I wonder, in your experience, and that scenario of course transcends both:
is there another story, is there more that a management team would want to know for how to, or what your recommendations would be to, operationalize that once the audit is
done, once those stories are communicated to people who are actually using the tools and, if not developing the model, actually having it out in the marketplace?

(23:16):
Is there a difference between what the board needs to know and what management needs to know to deliver on what that governance would require?
That's a really tricky question.
I think what we can do, usually we come up with just one report or assessment for the whole process. But for the boards and for the executives, I think they need to talk to each

(23:37):
other about what the board has to pressurize the executives on, and what the executives need to report back on.
So the communication between the two has to be solid.
And again, this goes back to the point about steering committees: the board and the C-suite
has to be one steering committee.
It has to be one collective mindset when it comes to responding to crises like these.

(24:03):
So the communication is the key there.
And I think, with the audit, if the corporation wants, we can also add to the audit what the duties and responsibilities are of both the board
and the top management, or the C-suite, in terms of communication and interaction.
And there should be more interaction when it comes to crises like these.

(24:23):
I myself have responded to a lot of terrorist attacks which have had impact on social media platforms.
And so building that playbook, what that playbook looks like, when do you interact withthe board?
Who is supposed to do what, and when?
We can add that playbook to the audit.
I mean, it's not a part of the audit, but if the corporation wants, we can design that playbook for them.

(24:46):
I have designed that.
I have done that in the digital counterterrorism world, where I worked in a multi-stakeholder environment, where I was working with the tech companies, the UN, the EU,
the governments, and all of those players were working in one ecosystem together.
And I had the privilege of designing that playbook for that whole ecosystem: how those important players and organizations are taking decisions, what is

(25:09):
being communicated to each player,
at which point in the crisis.
And we're talking about terrorist attacks.
We're talking about shootings where people are getting killed.
You've had amazing insights from working with an organization that included private and governmental, or public, players.
Can you speak a little bit to what you saw each group gravitate toward, and maybe what each learned from working with the others?

(25:33):
It's very difficult. Multi-stakeholderism is really difficult.
It's really complex because each player has its own agendas.
Each player has its own information that it keeps really close.
It doesn't want to share.
For example, tech companies, they have huge legal teams.
They don't like giving out information.
Even if we say, let's just say, for example, there's an attack, and there's, say, a gunman who goes out, and this happens multiple times in the US, school shootings happen.

(26:00):
You see,
lone wolf actors, we call them lone wolf actors: they go out, they gear up, you know, they have a camera, and then they livestream their shooting.
This is what happened in the Christchurch shooting as well, where the gunman was all geared up like an army person, they had a camera, and then they livestreamed the shooting on

(26:20):
Facebook.
There were some, I think, some 50,000 views, I forget the number, sorry about that.
There were so many views. Jacinda Ardern
and Emmanuel Macron had to create the whole Christchurch Call advisory, where they brought together the governments, the tech companies, the civil society, even the victims, the

(26:41):
families of the victims together to talk.
But the problem is the information gap, the information hesitancy, the information-sharing hesitancy, between these players.
That is one thing that I noticed.
And what it did for me was it gave me insights into how
the top leaderships in these different organizations are thinking, what their priorities are, where their motivation is coming from, and how much they are willing to sort of invest

(27:10):
in countering certain harmful technologies.
Agentic AI and generative AI are something that threat actors are using already.
And so, you know, how much are they aligned to fight that? It's really difficult, like, to bring all of those minds together.
Incredibly difficult.
As you're speaking, I'm just thinking what a multifaceted thing communication is, coming from the world of brands.

(27:36):
Communication means a number of things, but typically what is not part of that conversation are real-world, on-the-ground terrorist threats, and the role
of social media in that, and the role that the brands can play based on their use of that media.
So it's a huge eye-opener.
Good.
Well, thank you, Nus.
That is sort of wrapping up the end of our conversation.

(27:58):
Is there anything that we've not talked about that you'd like to get across to listeners?
I would say that there's a growing digital divide between the global North and the global South when it comes to digitization.
I mean, digitization is a term that used to be used, like, a few years ago.
And now there's this big gap.
I mean, a few years ago, people were worried about, okay, we need to get digitized.

(28:20):
But then, you know, now it's like, you know, AI is on a whole other level.
And so we're seeing that the LLMs that we have right now,
They're mostly in the English language.
They're Western.
They're Western by default.
They have the way people think in the West, the way people solve problems in the West, the values, the culture, the languages, most importantly.

(28:43):
Implementing those in non-Western settings is very difficult. But then we're seeing a lot of change in the global South, or the non-Western world, where we're seeing a lot of
countries coming up with their own solutions.
For example, in India,
The government is subsidizing compute power and the creation of an AI model which is proficient in the country's languages.

(29:03):
And India has 300-plus languages, and each state has multiple languages.
In Africa, governments are discussing collaborating on regional compute labs.
In Brazil, President Lula pledged $4 billion for AI projects.
And he said, instead of waiting for AI to come from China, the US, South Korea, Japan, why not have our own?

(29:24):
So the point that I'm trying to make is that
there's a huge digital divide, but then in the non-Western world, or the global South, as this term is used, people are getting up to speed, but it's not as much.
Very interesting.
Thank you so much.
Thank you for the time and your great insights and we'll do this again.

(29:48):
If you'd like to learn more about Nusrat Farook and effectivRAI, visit effectivRAI.com.
That's effectiv, with no E before the RAI.
For more AI insights specifically for boards and brand leaders, subscribe to the Brand AI Report newsletter at BrandAIReport.com.
I'm Michael Quinn.

(30:09):
Thanks for listening.
We'll see you next time on the Brand AI Report podcast, Boards, Bots and the Bottom Line.