
April 24, 2025 38 mins

Professor Werbach talks with Ashley Casovan, Managing Director of the AI Governance Center at the IAPP, the global association for privacy professionals and related roles. Ashley shares how privacy, data protection, and AI governance are converging, and why professionals must combine technical, policy, and risk expertise. They discuss efforts to build a skills competency framework for AI roles and examine the evolving global regulatory landscape, from the EU's AI Act to U.S. state-level initiatives. Drawing on Ashley's experience in the Canadian government, the episode also explores broader societal challenges, including the need for public dialogue and the hidden impacts of automated decision-making.

Ashley Casovan serves as the primary thought leader and public voice for the IAPP on AI governance. She has developed expertise in responsible AI, standards, policy, open government and data governance in the public sector at the municipal and federal levels. As the director of data and digital for the government of Canada, Casovan previously led the development of the world's first national government policy for responsible AI. Casovan served as the Executive Director of the Responsible AI Institute, a member of OECD's AI Policy Observatory Network of Experts, a member of the World Economic Forum's AI Governance Alliance, an Executive Board Member of the International Centre of Expertise in Montréal on Artificial Intelligence, and as a member of the IFIP/IP3 Global Industry Council within the UN.


Ashley Casovan IAPP

IAPP AI Governance Profession Report 2025

Global AI Law and Policy Tracker

Mapping and Understanding the AI Governance Ecosystem


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Ashley, thank you so much for joining me on the road to accountable AI.
Thank you so much.
Happy to be here.
IAPP was established as an organization for privacy professionals.
So how did it get involved in AI governance issues?
Yeah, great question.
So you're completely right.

(00:21):
We're actually celebrating our 25th anniversary this year.
So it's been a minute.
Part of, I think, the recognition was that, as you know, we do an annual survey, but we also are kind of in constant dialogue with our membership, and a lot of questions were
surfacing.
I wasn't part of the organization at the time, but Trevor, our CEO, has kind of recounted this to me, just: how's the organization gonna move forward to support privacy

(00:52):
professionals who are now being tasked to do AI governance work?
And so our annual report
in 2022, 2023, which is based on these surveys, started to see real concerns from privacy professionals who are then being tasked with AI governance, asking,

(01:12):
what are some of these best practices?
What are the intersections between rules for privacy?
And are they then applicable in an AI context?
And so it became a real opportunity; it seemed like there was a gap in the market
that a professional association like ours could help to really fill. Where we're focused on training and certification and providing a community for privacy professionals,

(01:44):
we could do the same for this emerging professional unit of AI governance people, which is really still to be defined.
And I'm sure we'll get into that a little bit more.
Yeah, how close is the overlap and alignment between the people who do privacy or data protection and the people who do AI governance?

(02:08):
It is and it isn't.
So, well, maybe I'll take a step back, or say it in a slightly different way.
There's overlap, but it's more of a Venn diagram than it is a complete set of the exact same skills.
And so what I mean by that is that some of the risks that come from the use of AI are

(02:37):
the same issues that we're dealing with in privacy.
Ensuring that data that's being used in these systems is protecting people's privacy.
Really understanding how data is flowing and moving within the technology that it's being used by.
Same issues that you want to track, independent of whether that's in an AI system or a different type of technology.

(02:59):
But there are other risks or other objectives that an AI governance professional needs to be aware of that are not something privacy professionals have necessarily
dealt with in the past.
So thinking about whether or not this system is ethical or responsible, and what does that actually mean?

(03:21):
So dealing with human rights issues and also,
even though we definitely have provided some courses and training in privacy-enhancing technologies in the past, having more of that technical focus, we really see that in the
AI governance community, another part of the Venn diagram is more of an overlap with the tech community, being developers and deployers of AI systems, but even

(03:50):
those that are working more on the technical data front.
And so those are things that I
think are emerging as important and different skills, kind of a broader view of what AI governance is, but definitely privacy is an important aspect of that.

(04:11):
As you say, it's still developing and varied, but what would you say the community of AI governance professionals looks like, and what are the kinds of skills that it takes to be
effective in those roles?
I'm really, really happy you asked this, because we just this week have been working through

(04:35):
something I've started since I've been in this position for just over a year now.
And that's been one of those big questions, which is: what exactly is an AI governance professional, and what are the skills that they need?
And I've been slowly kind of chipping away at this, and this week finally, because we're working on publishing an AI governance professional textbook that supports our

(05:00):
training and certification programs.
along with one of my colleagues, I'm working on a chapter that's currently entitled, "What Is an AI Governance Professional?"
And so for that, I've been putting a skills competency framework together to really try and define what that is.

(05:21):
And so, similar to what I was just mentioning in the previous question that you had, I'm really looking at skills that are coming from the technical background, from product
management.
So because we have regulation now, in
the form of the EU AI Act, that's based on product safety legislation, having somebody that has knowledge of how a product works and how you've managed that in the past is really,

(05:45):
really useful.
Basic governance skills.
So again, one of the things that I should have mentioned is that privacy professionals have been tasked in the past with being involved in governance committees and
navigating between multidisciplinary teams.
And so as a result, they've developed really great soft skills, to be able to kind of speak the same language and really translate some of the key issues.

(06:15):
And so that's really helpful in a governance context for AI as well.
Policy is super important.
So making sure, as I said, because we do have some rules in place now, that you're setting up your organization to be compliant with that, but also making
sure that where there are gaps, or you're using AI in your context, you're applying and adapting those rules.

(06:40):
I mentioned ethics before.
And then really, I think that risk, legal compliance, and assurance is a whole other category.
So just...
How are you working at establishing appropriate controls within your organization to mitigate the risk that would come from deploying or using these systems?

(07:04):
And then does that involve assurance practices, whether that be internal to your organization or using a third party through an external auditor, as an example?
Yeah, it's a big portfolio.
Presumably we don't have enough people now that are able to provide that kind of expertise in companies.

(07:25):
So I know IAPP is doing a lot with the certification and so forth.
But what needs to happen, broadly, both in terms of what your organization is doing and beyond, to have enough people to fill those roles?
Yeah, I think actually maybe that's one of the reasons that it's taken so long to think about what is an AI governance professional.

(07:46):
Maybe it's not even the right question, because it should really be: what is an AI governance team? Recognizing that you need people with these different backgrounds and
perspectives to make up your AI governance team within an organization.

(08:07):
Unlike other, kind of more discrete, domains in the past, what we're seeing is somebody with an AI governance title is really coordinating amongst those different roles or those
different subject matter experts.
And so I think that, yeah, maybe I should rethink the name of my chapter. But within our efforts, we're trying to,

(08:33):
by recognizing that it is kind of a multidisciplinary profession, address the different types of requests that are coming in.
It's not one person that can do all of this.
But like I was saying before, with some of those soft skills, you have to have enough knowledge and enough of the nomenclature to really be able to...

(08:59):
talk to each of those teams, do some of that translation, navigate policies, have a radar to be on top of everything that's happening, because it's happening and changing so fast
in this space.
So I think that combination of hard skills, with knowledge and knowing how to develop appropriate policies, but then also some of those soft skills, is what we're thinking of,

(09:26):
and that's really what the root of our training is.
And you've talked about a number of things that you're doing, but you said you've been in this role about a year.
What else are you working on in terms of AI governance for IAPP?
Yeah, so what we established with the AI Governance Center was really to repeat a lot of the great core work that IAPP had built for the privacy community.

(09:56):
So that includes, we've talked a little bit about training.
Clearly, that's a core component of what we do.
But to complement our training, we have a certification program.
And at least what we've seen in privacy is that that credential is something that organizations look for when hiring people, because it's a demonstration of competency and

(10:17):
capacity and/or capabilities for that subject matter.
And we're starting to see a little bit of that now with AIGP.
So I think that that's something that's really significant.
And also like any good professional association, we're really
building a very strong community.

(10:38):
I think in this sense, we have KnowledgeNets, which are in individual cities and put together by volunteers of the organization, and they could be monthly meetings or
once a year, but they're on topics that people really care about.
And then on the other side of the engagement spectrum,

(11:01):
We have two conferences annually that are dedicated to AI governance.
AI Governance Global is what they're called.
We have one in Europe, in Dublin, this year in May, and then one in September in Boston.
And at those, we're really focused on implementation, sharing best practices for implementation.
So it's not just, here's the latest and greatest in what's happening with AI technologies, because there's a plethora of other venues for that.

(11:28):
But this is really trying to help practitioners in that day-to-day task of, okay, this has landed on my desk.
How do I deal with this?
How do I translate the work that somebody else has done and think about it in my context, because these best practices for each of those domains aren't developed yet?

(11:49):
So that, and then finally, I think that something that's relatively new to IAPP,
like 10 years, maybe not 25, in comparison to the duration of the organization, is really our robust research and insights team, as well as the publications work that

(12:11):
happens.
And so we have contributors, which are really excellent, and our own kind of news team that's following and tracking everything.
And so recently, as an example, we had a five-part AI literacy series.
And so I think that...
Because we're so focused on implementation, it's a really great place to come and get best practices. And what our research team does, not only on these annual reports, but some of

(12:39):
our other reporting, will be things like a top 10 operational impacts series on the EU AI Act, just as an example.
So people can really take that and adopt it within their organization.
How much support is there out there in industry for really engaging with responsible AI, accountable AI as I talk about it, going beyond just the minimal requirements to check a

(13:05):
box that you've complied with legal requirements?
Yeah, I mean, we'll have our report coming out soon, actually.
So this is, again, a timely question.
But in it this year, we found that 58% of organizations are saying that they're doing AI governance.

(13:28):
35% or 39% said that they had an AI governance committee.
Obviously, that doesn't tell us whether
they're checking a box, like they're just doing it for the fun of it or not, but I don't think you stand up things like a governance committee and go to this effort of putting

(13:51):
resources behind it if you're not being serious about it.
Some of the work that the report digs into, because we did some interviews afterwards with some of the respondents, was looking at
how they've established that and who's involved. And what we've been interested in is how that even works within the organization, from a reporting structure perspective.

(14:16):
So how seriously are these companies taking it from that perspective?
And also we were looking at it from the perspective of: is it just there to advise?
Is it there for decision-making purposes?
And so I think those are all factors that we're seeing that
are an indication that organizations are taking it seriously. But still, 48% or 58%, sorry, isn't everybody.

(14:47):
So I think that there's still a lot of room for improvement in this space.
And even one of the things that we've heard when we've had some of these convenings is, again, some people are just kind of
doing this off the side of their desk, so it's not even a formal program.
So it's not to say that organizations aren't thinking about it, it's just not formalized.

(15:09):
But then others really feel like they're severely behind, because they're not aware of the state of maturity within other organizations.
And then when they reflect with other people, they realize, okay, we're all sort of in learning mode right now, in adapting mode.

(15:30):
And so I think that we'll continue to see, as we do these surveys year over year, not only that number increase, but more of an indication to your actual question, which is: how
seriously are they taking it?
What are they doing?
What does an AI governance program look like to even qualify as an AI governance program?

(15:50):
Do you have a sense of what the biggest pain points are, the biggest stumbling blocks that organizations face that want to build out these kinds of programs?
Yeah, I mean, one of the things that we found really interesting was that the amount of commitment or resources really varied with the size of the organization.

(16:11):
So unfortunately, as you can imagine, smaller organizations don't have the capacity in the same way that larger organizations do.
So that's
an issue on its own, just from a structure perspective, because in some circumstances these are net new teams, or just even net new responsibilities, and these teams are kind of already
overstretched and don't have that capacity.

(16:35):
So that's one.
Another one is that it's really complex.
One of the pieces of research that we've done recently is just mapping all of the intersecting pieces of legislation within the EU AI Act.
And there's more than 60 of them.

(16:56):
And so, again, back to this isn't just a one-person task: unless you find this really special unicorn, it is really meant to be having
that interdisciplinary team.
And that might even mean having

(17:17):
an interdisciplinary set of lawyers that focus on things like product liability, data protection, security, et cetera, because those are all different areas of subject matter expertise
that you need to be thinking about as it relates to governing your AI.

(17:38):
As you mentioned, there is a European Union AI Act.
There's not a United States Federal AI Act, but there's lots of state legislation pending and being adopted.
I know your organization doesn't take policy positions, but what is your sense of the state that we're at in terms of developing the structures of AI law and regulation around the world?

(18:01):
Yeah, well, we do have a state law tracker for AI legislation in the US, and so we have been following this, and there's certainly been a proliferation of AI legislation being
proposed.
It's not all passing, but it has provoked really, really interesting debates within state legislatures, because

(18:27):
It's a relevant topic to people.
I think states are an interesting place where you're still very close to people.
They're concerned and don't necessarily trust these systems, but states also are responsible for some of the most critical pieces of infrastructure within a society, like health, like
education, where these AI systems might be being used.

(18:51):
And so as a result, we're seeing at the state level,
more of a focus on sector-specific or domain-specific legislation.
And so I think that we're going to see more of that.
What's also been really, really interesting is how similar some of the drafting of these pieces of state legislation is.

(19:15):
And I think that you can see that state legislators are working together to really
come up with shared best practices, which, even if it's not the same language but a similar intent, is going to make it easier for organizations to
develop a cohesive set of compliance activities within their organization that will allow them to demonstrate compliance in many states, as opposed to

(19:44):
having to have a patchwork of
compliance tactics that suffice in one state and another state and another, which makes it incredibly, incredibly confusing and difficult.
Yeah, this is always the debate about whether to have states be these laboratories of democracy and experimenters, or the value in having a uniform federal system.

(20:09):
And obviously there's an interplay: the more state legislation there is, the more pressure there is for Congress to preempt.
Do you have any sense about how that's going to play out going forward?
I mean, I think we still haven't seen federal legislation on privacy.
So I would guess, in a similar way, that the more we see legislation happening at the state level, especially if there are common objectives across that, then that

(20:39):
might be a way to prompt federal legislation.
That said, it's not a priority that we've seen.
I don't anticipate it anytime soon.
But I think it is important to note that the EU AI Act is a transnational piece of legislation.
And so even though companies might be US-based companies, they're still taking note of what those requirements are,

(21:12):
either because they are serving Europeans independent of where they're physically located, or because they're operating in Europe.
And so I think that this is something that is setting or establishing a bar of baseline requirements.
Yeah, we have a model with privacy where Europe adopted GDPR.

(21:35):
We don't have a US equivalent, and most companies, at least in my experience, multinationals, will just comply with GDPR as they're building those structures.
Do you anticipate that the pattern will look somewhat similar?
Obviously, one difference now is that the Trump administration in the US is taking a much more aggressive deregulatory line.
So what do you anticipate we'll see between the US and Europe going

(22:00):
forward?
You know what, even in advance of this administration, I've been very skeptical that it's going to have, if you put the Brussels effect on a spectrum, as strong of
a Brussels effect as GDPR did.
And that's because the scope of the AI Act is so much bigger.
And also you have so much secondary legislation that's going to complement the rules around it.

(22:25):
And so I think that, again, the overall objectives
might be similar, but with the implementation of them based on the domain in which the technology is being used, the region, the technology type, I think we're gonna start to
see a lot more nuance with that.

(22:46):
And I think the driving force around that is actually gonna be standards.
And I do think that the EU AI Act is a forcing function for standards to be created
that speak to some of the requirements within the act itself.
But I think that those are going to be built in concert with other nations, including the US.

(23:08):
And because the US is such a significant market from an AI perspective, from a user base, I think that some of those perspectives will be integrated into that. Or not just some of
those:
it'll have a strong voice in the standards development processes.
And so,
Yeah, even in the absence of legislation, there are certainly going to be US stakeholders at the table.

(23:37):
And standards are obviously very important in this area, but there are a plethora of different standards efforts at many levels.
So how do we get to some clarity for organizations in terms of which standards are actually meaningful?
Yeah, I mean, standards are a very complex topic, you're right.

(23:59):
And so I think maybe the first thing to do is recognize that there are standards that are being developed for how the organization is set up.
Is it set up in a way that is conducive to even having AI operate in a safe and responsible manner?
And so a good example that's been shared with me by standards development organizations

(24:23):
is: that's kind of your warehouse. Is your warehouse in order? But your products, and testing and evaluating each of those for, again, the different objectives that you might
have, those are a different set of standards.
And then you have what we're working on, which are standards for people, the people that are operating those systems.

(24:44):
In our circumstance, we're actually even only looking at a subset of that, which is the governance component.
I think that we're gonna start to see standards for people that come out of engineering schools, maybe.
And so having certain types of qualifications for what a good AI engineer looks like, or a data provider that's providing data in the value chain to an AI system.

(25:10):
So I think thinking about standards, at least from the perspective of organization,
product, and people, is really important.
And then within that, where the standards efforts have been most advanced to date is on the management system front.
So you have the equivalent of what we see in the security world, which is ISO/IEC 27001.

(25:35):
We have ISO/IEC 42001 for AI systems, or AI management systems.
But again, that's like, does your organization have a policy in place?
Do you have governance, et cetera, et cetera?
So your warehouse.
And then we start to see adaptations of things like software as a medical device, for products and that type of safety.

(25:56):
Even what I find really interesting is, when you think about an AI system as your data being input, your model, and then the output, even then you're having standards at those
different levels.
And so we've seen a lot of discussion related to climate standards lately for
data warehouses and data processing.

(26:16):
And so I think there's a lot of work that's being done that will come together into that.
And then obviously we're working on people's standards.
And so there's lots of work underway in that field as well.
Maybe this relates to the question I asked before, but isn't there a danger that, as all this becomes so standardized and so formalized, we limit the ability of people to be

(26:41):
creative, both from an innovation standpoint, but also from being flexible, just given how novel these technologies are.
And as you say, there are so many different aspects; AI is not just one particular kind of product that a company provides.
Yeah, I think that standards, as much as we think of them that way, can vary from just a basic checklist of: have you followed good practices?

(27:07):
Did you do your due diligence, basically?
Did you inform or notify people that they're talking to a machine?
These are kind of how you go about doing that.
There's a huge variance in terms of what would be accepted.
And so
that could be a standard.
And then there's, again, why product standards or product safety are so important in this conversation: we can learn from the past. We need standards to be pretty similar

(27:38):
so that we have a common output when we're producing a phone, or we as consumers want to have trust that this phone is safe and is going to operate in the way that's intended and
expected, or my electricity is going to turn on
and not shock me every single time.
And so a lot of what I think is important, whether we're talking about this in terms of guardrails that are coming from standards or legislation, is that the types of rules that I

(28:10):
think companies are looking for right now are things like, does your car have brakes?
Because that will allow us to go faster and to innovate and not every car looks the same.
but the brake system kind of is functioning in a similar way.
And there are standards around that.
And so these are the different types of things where the guardrails that we put in place, again, how wide and how narrow those are, might vary.

(28:40):
And the market will help to determine what that variance is.
But they will allow us to innovate in a way that is also, I know we
have talked about this in the past, protecting people. And especially for people that are kind of concerned about, like, innovate and optimize, I think it's the same outcome that

(29:04):
you want to have: you want this to actually be a technology that's adopted, and no one's going to adopt it if they don't trust it.
Yeah, this is a big question though.
What are the characteristics of governance practices, standards, and regulation, frankly, that promote trust and facilitate innovation, versus the ones that just get in the way and

(29:29):
slow things down?
Do you have any sense about how we draw those dividing lines?
Well, I think it's a balance to be had, but this is again where I think a lot of criticism against the EU AI Act comes from. And, this is just a personal opinion, I've been
a huge proponent of having rules for this exact reason.

(29:51):
Do I think that a comprehensive piece of legislation is a disservice?
No, but I do understand where the criticism and critique comes from.
So I'm from Canada.
We had Bill C-27, which was legislation that was tabled here, and a lot of the debate was: do we follow the direction of the EU AI Act and be comprehensive, or do we create a

(30:16):
framework or something for domain-specific legislation for AI?
Ultimately, the clock ran out.
We will have a new government with likely an election coming up soon.
So it's to be determined what will happen here.
But I think this is why we haven't seen more legislation in other countries yet: that exact formula of, do you have comprehensive legislation that puts these foundational

(30:45):
rules out, is that actually the best approach, or do we take a step back and say, okay, we need to look at this from the context of how that system is being used?
So what are you most worried about in terms of the areas that you focus on?
Yeah, I mean, to me, it's these small cuts of AI.

(31:50):
And I guess what I mean by that is: is there a world in which there's this crazy Terminator scenario? I hope not.
I hope that there's some sensible implementation.
of what we were just talking about, whether those are self-imposed guardrails from companies, or whether those are standards, or whether those are rules from government.

(32:20):
I think we're gonna, as a society, figure it out.
Like nobody wants this doomsday scenario to happen.
But I think that, well, I sort of have two, maybe.
So.
On one hand, I do think that, while not a doomsday scenario, I have personally been very surprised with how fast the technology has changed just in the last year, the last six

(32:52):
months.
I worked in the Canadian federal government before, building out policies for government's use of AI systems.
And this was in 2018, 2019; we were having conversations around, okay, well, what's gonna be worse, like, robots or climate change?

(33:13):
And so in that sense, you kind of get an understanding of where our heads were at in terms of the time horizon of this happening, of potentially artificial general intelligence.
And I...
It's not for us to debate right now where that line of AGI is, but I do think that having AI systems that personify humans a lot more is getting us much closer to it.

(33:42):
And I've just been very, very surprised with how quickly this technology is changing.
And so I guess what scares me in that scenario is that we
don't have the attention to be able to put some of those rules in place sooner rather than later.
I was at the Paris AI Action Summit, and it was just striking to me that there were kind of these two different conversations going on: we need to innovate and

(34:11):
move faster, and that's really great.
But then also there's the reality of these companies that are building these things that are incredibly powerful,
and us not having space as a society, which sometimes we do through government, sometimes we do through industry councils, to be able to say, this is what acceptable to us looks

(34:34):
like.
So that's kind of what scares me: there's just so much going on.
There have been so many elections this last year all over the world.
Again, we're kind of continuing with this in Canada.
And so with all of that change, there just hasn't,
I don't think, been that space for it. And where you would think you would have those forums, like the Action Summit,

(34:56):
that's kind of what surprised me from that: we didn't have that conversation.
So that's one bucket.
The second one is the small cuts.
What I mean by that is: AI is happening.
Like, we're not talking about this as a future scenario.
The classical version of AI, these prediction systems,

(35:18):
automated decision-making systems, are in our lives.
And generative AI is in our lives.
And they're not necessarily big sweeping decisions where, all of a sudden, my life is horrible tomorrow; that's not what I think is going to be the big issue.

(35:40):
I think it's those
small changes or small developments, time over time, that will really end up leading to a place where some people will be at an advantage, because of how these AI systems work: based
on historical data, they will predict things that will be positive for one demographic, but they might be negative for another demographic.

(36:03):
And I think that we always think, or we've talked about, this fairness in terms of it impacting certain minority communities, et cetera.
I think the answer is we just don't know yet, because we're not required to track that. We're not required to track what the outcome would have been if a human had made it
versus this machine, and kind of take those forensic points in time and then determine what's acceptable.

(36:36):
then we're just not gonna know.
And so I think that's ultimately going to start to reshape things.
A small example of this is that I've always been concerned, just as a kind of point of interest, about Spotify playlists, and how I'm like, I don't choose my music so much

(36:56):
anymore.
I used to really take a lot of time and love it and...
And then I recently was listening to a researcher who was talking about how that is becoming an issue.
And so again, whether my music listening habits are impacted or not, not a big deal.

(37:17):
But if that's impacting your financial capabilities and your health decisions, again, even if they're small,
the small things add up to a big issue.
Absolutely.
No, so many challenges and things moving so fast.
It's all the more important, the work that you and others are doing to build up the expertise of the practice of AI governance.

(37:42):
So Ashley, thank you so much.
It's really been a fascinating conversation.
I appreciate your joining me.
Thanks so much.