
November 9, 2024 · 18 mins

In this powerful episode of the AiSultana Podcast, we dive deep into the recent U.S. Senate hearing on AI oversight. Featuring testimony from former insiders at leading AI companies, this episode uncovers pressing issues that demand attention, including the alarming timeline to AGI, profit-driven motives that undercut safety, and the global stakes of AI regulation. Hear exclusive insights on industry risks, inadequate internal safeguards, and the urgent need for robust government action to protect public interests. Key witnesses, like former OpenAI board member Helen Toner and ex-Meta engineer David Evan Harris, underscore the necessity for transparency, legal safeguards, and third-party accountability. Whether you’re an AI enthusiast or a concerned citizen, this episode offers a rare glimpse into the conversations shaping the future of AI governance.

Brought to you by AiSultana, a consultancy specializing in AI solutions for industry.

Join us daily for concise updates on crucial developments in AI, and why they matter to you.

Available via YouTube, Apple, and Spotify.

Don't forget to like and subscribe, and explore our free wine consumer app at www.aisultana.com.

Tune in to stay informed about the pivotal topics shaping the future of AI in industry.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome back to the deep dive.

(00:01):
Looks like we're diving into AI today,
a whole stack of excerpts here from the AiSultana podcast.
And you know what really stood out to me?
What's that?
All the stuff about the potential risks,
especially what people on the inside are saying.
Yeah.
The ones actually working on this stuff.
Yeah, it definitely hits different
when it's coming from people

(00:22):
who've been in the trenches, right?
Not just theoretical.
We've got folks here who worked at OpenAI,
Meta, those kinds of places.
Exactly.
So we've got some juicy bits from a Senate hearing, right?
Former AI developers, spilling the tea.
And then some follow-up chats from the podcast,
kind of unpacking it all.
Where do we even begin?
Well, there's this one story, more like a metaphor, I guess,

(00:45):
that really captures the whole issue.
They called it the metaphor of the bear.
Oh, okay.
I'm all ears.
Hit me with this bear metaphor.
So picture this.
You're in a forest, right?
Suddenly a bear shows up.
Everyone starts running like crazy.
Makes sense.
Don't want to be bear food.
The thing is, the only thing that matters
is not being the slowest.
Slowest one gets caught.
Okay, I'm starting to see where you're going with this.
Kind of a race to the bottom.

(01:06):
Exactly.
That's how some developers describe
AI development right now.
It's not about being the best.
It's about not being the worst.
Not being the one who makes the biggest mistake.
So not building the safest AI,
just making sure you're not the one
unleashing the digital bear on the world.
That's kind of scary.

(01:26):
Is that really how it works out there?
Or is that an exaggeration?
I think it's closer to the truth
than we'd like to admit.
The pressure is immense.
Everyone wants to be first.
First to market, next big breakthrough.
And sometimes safety takes a backseat.
And that's what these whistleblowers
are trying to tell us, right?
This rush, this race.
It's creating a blind spot.

(01:47):
Absolutely.
And they've got concrete examples.
One guy, William Saunders, used to work at OpenAI,
said he could have just walked out
with their most advanced AI system.
Just taken it.
Wait, seriously?
Just like pocketed it and walked away.
That's insane.
Security was that lax?
Makes you wonder what else they might be missing, right?
No kidding.
So how real is this threat then?

(02:08):
How soon could this bear actually catch up to us?
Any guesses on the timeline for AGI,
that artificial general intelligence thing?
That's the million dollar question.
And there's no easy answer.
Some of these experts, they're saying AGI,
it could be just a few years away.
One to three even.
Others think it's further out.
Maybe a decade or two.
OK, so we're looking at a pretty wide range of possibilities.

(02:30):
But even if it's on the longer end,
the potential consequences are huge, right?
We've got to start thinking about this now.
You got it.
We're talking cyber attacks, even
bioweapons powered by AGI.
This isn't science fiction anymore.
It's something we need to be ready for.
Makes you wonder about the motives of the companies
building this stuff.
But let's talk about the whistleblowers themselves
for a minute.
They're really sticking their necks out here.

(02:52):
What kind of challenges do they face speaking up?
It's not an easy road, that's for sure.
A lot of pressure to keep quiet, even
if they see something wrong.
Many of them sign these non-disparagement agreements,
you know, part of their contracts.
So even if they see something that
could be a danger to the public, they can't blow the whistle.
Pretty much.
It's really tough for them to speak freely.

(03:13):
And on top of that, there aren't a lot
of legal protections for whistleblowers in AI
specifically.
You'd think someone warning about dangerous AI
would be protected, right?
I mean, that seems like common sense.
You'd think so, but the law hasn't caught up
to how fast this technology is moving.
A lot of the potential harm these folks are talking about,

(03:34):
it's not even technically illegal yet.
It's this gray area.
They're risking their careers to speak out
about something that might not even be against the rules.
Wow.
That's a tough position to be in.
I bet there are way more people seeing this stuff
who are too afraid to say anything.
Probably.
And then there's this whole thing about regulating AI,
people saying it'll slow us down, especially compared

(03:55):
to China.
What do you think about that?
Yeah, that's the big argument, right?
We can't fall behind.
But one of the experts, Helen Toner,
she brought up an interesting point.
China, they're already regulating their own AI sector.
Pretty actively, actually.
So it's not this free for all everyone thinks it is.
Interesting.
And Toner, she was saying that good regulation,

(04:15):
smart regulation, it can actually boost innovation.
It builds trust.
People are more likely to use something
if they feel like it's safe and reliable.
Yeah, that makes sense.
It's like you don't want to build a house
on a shaky foundation.
Exactly.
And it goes back to that bear metaphor, right?
Everyone's so focused on outrunning each other,
who's checking the ground beneath their feet?

(04:36):
Good point.
Someone needs to make sure we're not building this whole AI
thing on shaky ground.
Exactly.
So we need those regulations, right,
to make sure we're building on solid ground.
Definitely.
Ethical and safe.
That's the foundation we need.
OK, now, open source AI.
That's a whole other can of worms, right?
Oh, yeah.
People say it's a way to make AI more democratic,
but it's not that simple, is it?

(04:58):
Not at all.
The thing about open source, once those weights are out
there, they're out there.
No controlling who uses them or how.
Like a recipe.
Once it's shared, anyone can make the dish,
even if they have no idea what they're doing.
Exactly.
Good intentions don't guarantee responsible use.
And then there's jailbreaking.
Jailbreaking.

(05:18):
Yeah, I've heard that term thrown around.
What's that all about?
Basically, people finding ways to bypass the safety
restrictions built into the AI.
Oh, like picking a lock on a door that
was supposed to be secure.
Exactly.
People will push the limits, especially
with tech this powerful.
You build in safeguards, but there's
no guarantee they'll hold.

(05:39):
It's like you're trying to contain something incredibly
powerful, but the container might have holes.
That's a good way to put it.
So open source, it's got its pros and cons.
It can make AI more accessible.
But it also means potentially putting powerful tools
in the wrong hands.
Right, that's the dilemma.
And then there's the whole ethical question,
like who's responsible when something goes wrong.
Yeah, it gets complicated fast.

(06:00):
Absolutely.
We're still figuring out how to use this powerful tool
responsibly.
OK, data privacy.
That's a big one.
Companies like Meta, using our data to train their AI,
especially here in the US, where we've got weaker protections
than, say, the EU.
What were people saying about that?
A lot of concern about transparency or the lack of it.

(06:22):
And control.
Do we really have a say in how our data is used?
There was this example from the UK,
really highlights the issue.
Meta announced they were going to use public Facebook
and Instagram data, right?
Yeah, to do what?
Train their AI to better reflect British culture.
It caused a huge uproar.
I can imagine.
OK, we know our data is being used,

(06:42):
but tied to our nationality.
That feels different.
People felt like their privacy was violated.
Big time.
And they weren't happy about it, especially
without their consent.
It raises the question, do we have any control
over our digital footprint?
And how comfortable are we with this stuff being used to train
AI?
Yeah, it's a lot to think about.
Honestly, listening to all of this,
it can feel a little overwhelming.

(07:03):
I hear you.
It's a lot to process.
The truth is, we're all still learning about this stuff,
what it can do, what the risks are.
But that's why these conversations are
so important, right?
Absolutely.
The more we talk about it, the better prepared we'll be.
Exactly.
And luckily, we've got a whole lot more to unpack here.
OK, so we've got the bear metaphor,
met some interesting folks who are putting it all on the line,

(07:24):
and scratched the surface of data privacy and regulation.
Where do we go from here?
Well, next up, let's dig into some potential solutions.
It's not all doom and gloom.
There are people working hard to figure out
how to do this right.
All right, I'm ready for some good news.
Lead the way.
So after that Senate hearing, the AiSultana podcast host,
they had Helen Toner on as a guest.

(07:46):
Oh, yeah, she's like the expert on AI policy, right?
Pretty much.
And she had some really interesting thoughts
on how we move forward.
One thing she kept coming back to was transparency,
in AI development, I mean.
Yeah, makes sense.
Like shine a light on what's going on behind the scenes.
Yeah.
Sunlight is the best disinfectant.
That's the idea.
Lift the curtain.
Make companies be more open about their safety practices,

(08:08):
the testing they're doing, what data they're using
to train their AI, all of it.
Kind of like those making-of documentaries for movies.
You see all the work that goes into it.
Except, obviously, the stakes are a bit higher with AI
than with a movie, right?
Yeah, no kidding.
And transparency, it builds trust too.
If people understand how AI is being made, what's being done
to keep it safe, they're less likely to be afraid of it.

(08:31):
OK, so open those studio doors.
Let's see what's happening on the AI set.
But transparency is just one piece of the puzzle, right?
What else did they talk about?
Well, another idea that's gaining traction
is independent oversight.
Like separate organizations not connected
to the companies who can review and audit AI systems
before they're released.
So kind of like a safety inspection for a car

(08:53):
before it hits the road.
Make sure it's roadworthy.
Exactly.
And just like car safety, you need standards, right?
Tests to make sure these AI systems are meeting
ethical and safety benchmarks.
Interesting.
What about licensing AI developers?
Have you heard anything about that?
Oh, yeah, that's another one that's being tossed around.
Setting qualifications.

(09:13):
Standards for who can actually build these advanced systems.
Sort of like doctors or lawyers needing a license.
So not just anyone could whip up some powerful AI
in their basement.
Exactly.
You need to show you've got the expertise,
understand the risks.
It could help professionalize the field,
build a culture of responsibility.
It all sounds great in theory, but are policymakers actually

(09:35):
taking any of this seriously?
Or is it just talk?
Well, there are some signs that things are moving.
I mean, the Senate even having that hearing
is a good sign, right?
It means lawmakers are starting to pay attention.
True.
Anything concrete happening?
The EU, they're actually further along.
They've got this AI Act that's supposed to be finalized soon.
And it lays out some pretty strict rules, especially

(09:55):
for AI systems that are considered high risk.
So they're actually doing something about it.
Not just talking.
What about the US?
Any movement on our end?
A little slower here, but there are a few bills
being considered focused on AI safety and ethics.
And the White House even put out a blueprint
for an AI Bill of Rights, which is

(10:17):
like a set of guiding principles.
OK.
So at least there's some momentum building.
It feels like we're at a turning point here.
And maybe this is me being cynical.
You always hear that too much regulation is
going to stifle innovation, right?
How do we balance protecting people
with letting technology advance?
That is the question, isn't it?

(10:37):
It's a debate that's been going on forever.
But I think Helen Toner made a good point.
Sometimes a little bit of constraint
can actually lead to more creativity.
It's like, if you give people a clear set of rules,
it can focus their thinking, lead to better solutions.
I like that.
Constraints breed creativity.
So maybe it's not about stopping AI.
It's about giving it the right direction, the right guardrails.

(10:58):
Exactly.
Steer it in a safe and beneficial way.
That's what those whistleblowers are pushing for, right?
Not to shut it all down, but to do it responsibly,
with safety and ethics at the forefront.
OK, so we've talked about solutions.
Sounds like things are moving, even if it's slow.
But let's zoom out for a second.
Why should the average person even care about all of this?

(11:19):
How does this impact everyday life?
That's a really important question.
And the answer is, AI is already impacting our lives
in so many ways, even if we don't realize it.
What we see online, the things we buy, even health care
decisions, AI is playing a role.
It's like this invisible force shaping our world.
And it's only going to get more powerful, right?

(11:40):
Absolutely.
And that's why we need to be informed.
We need to be part of this conversation.
Because the choices we make now about how we develop and use AI,
they're going to have a huge impact on all of us.
So it's not just a tech issue.
It's a societal issue.
It's about all of us.
Exactly.
AI isn't something happening in some lab somewhere,
separate from our lives.
It's becoming a part of our everyday reality.

(12:00):
And it's going to affect how we live, how we work,
how we interact with the world.
It makes those whistleblower stories even more powerful,
doesn't it?
They're not just talking about some hypothetical thing.
They're talking about something that could affect all of us.
Exactly.
They're giving us a glimpse behind the curtain showing us
what can go wrong.
And they're urging us to be more thoughtful about how

(12:21):
we approach this technology.
And it's not just up to the experts to figure it out,
right?
We all have a role to play.
Absolutely.
The future of AI, it's not set in stone.
It's something we can shape through our choices
and our actions.
OK, now I'm feeling a bit more optimistic.
There's something we can actually do.
But where do we even start?
What can we actually do to make a difference?

(12:44):
Well, first things first, we need to educate ourselves.
The more we understand about AI, the better equipped
we'll be to make smart decisions about how it's developed,
how it's used.
So read articles, listen to podcasts,
have these kinds of conversations.
Exactly.
And ask questions.
Don't be afraid to ask, even if they seem basic.
The AI world is moving so fast, everyone's learning as we go.

(13:07):
And it's not just about the technical stuff.
It's about the ethical side, too.
What are the implications for society?
Because AI is just a tool, right?
It can be used for good or bad.
It's up to us to decide how we want to use it.
Exactly.
And we can also be more conscious consumers.
Think about the companies you support.
Are they thinking about AI safety, about ethics?

(13:27):
And be aware of how you're using AI in your own life.
Like, are we OK with how social media uses our data?
Or facial recognition in public spaces?
Right.
Those are the kinds of questions we
need to be asking ourselves.
It's about being critical, about questioning the tech that's
all around us.
And letting our voices be heard.
Contact our elected officials.
Support organizations that are fighting for responsible AI.

(13:49):
Talk to our friends and family about this stuff.
Exactly.
The more people who are aware of the problem,
the more pressure there is to find solutions.
It's a long process, but it starts with individual voices
speaking up.
OK.
I'm starting to see the bigger picture here.
We've got the whistleblowers.
We've got policymakers trying to catch up.
And then there's us, everyday people with more power
than we realize.

(14:10):
And don't forget about staying curious.
AI is changing so fast.
Even the experts are constantly learning new things.
Read those articles.
Listen to those podcasts.
It helps us stay informed and make better decisions.
Speaking of podcasts, any memorable stories
from AiSultana about individuals
making a difference?
Oh, absolutely.
There's one about this group of high school students.

(14:32):
High schoolers.
Wow.
They built an AI-powered app that helps people identify bias
in online content and report it.
That's amazing.
They're already out there changing things.
It just shows you, you don't have
to be an expert to make an impact.
They saw a problem.
They learned about AI.
And they used it to do something good.
Gives me hope, honestly.
Me too.

(14:52):
It's a good reminder that even with all the uncertainty,
there are people out there who are passionate about using
AI for good.
And they're finding creative ways to do it.
OK, so lots of ideas on the table.
Transparency, oversight, even licensing for AI developers.
But where does that leave us?
The everyday folks.
Just using technology, not building it.

(15:14):
Right.
It can feel pretty big picture.
Like, what can we actually do?
But honestly, a lot of this comes down
to individual choices we make.
Really?
How so?
Think about it.
Every time you're on social media, searching online,
buying something, those sites recommending stuff,
you're creating data.
And that data, it feeds these AI systems.
Right.
Our data is valuable.
That's not new.

(15:34):
But I guess I didn't think about it like,
we're not just using AI.
We're kind of shaping it as we go.
Exactly.
So that means we have some power here.
Are we cool with how our data is used?
Do we want to support companies that are doing AI ethically?
Those choices, millions of people making them,
that sends a message.
Voting with our wallets, but also voting with our data.
Exactly.
And beyond that, staying informed, speaking up,

(15:57):
talking to friends, family, writing to our representatives.
The more we bring up these AI safety issues,
the harder it is to ignore them.
More people aware, more eyes watching those
who need to be held accountable.
That's the idea.
Push for solutions that work for everyone,
not just the tech companies.
It's a long game, no doubt.
But change starts with individual voices, doesn't it?

(16:19):
OK.
Yeah, I'm seeing how it all connects now.
The whistleblowers sounding the alarm,
the policymakers trying to figure it out,
and then us, regular people, actually having some influence.
And don't underestimate staying curious.
AI changes so fast, even the pros
are always playing catch up, reading articles,
listening to podcasts like AiSultana.
It helps us make smarter choices.

(16:41):
Speaking of AiSultana, any examples
they gave about people actually making a difference?
Any stories stick out?
Oh, yeah, there was this one about a group
of high school students.
High school?
Wow, yeah.
They built an app, uses AI to help people spot bias
in online content.
And then you can report it through the app.
That's incredible.
They're already out there doing this stuff.

(17:01):
It just shows you, you don't have to be a genius
or a politician to have an impact.
They saw a problem, learned about AI,
and used it to make things better.
That's pretty inspiring.
Gives you hope.
Big time.
With all the craziness, all the uncertainty,
it's good to remember there are people out there who
want to use this tech for good.
And they're finding really creative ways to do it.

(17:24):
This whole deep dive.
I went from feeling overwhelmed to now feeling like, OK,
maybe I can actually do something.
That's what we like to hear.
Knowledge is power, right?
Now you've got the tools to make sense of this stuff.
And you know, it's funny.
We talk about AI, this cutting edge tech,
but it comes down to really human stuff.
Our choices, what we care about, what we're

(17:46):
willing to put up with, what kind of future we want.
It's not just the tech, it's us.
Couldn't have said it better myself.
AI's future, it's a reflection of our own humanity.
And on that note, I think it's time
to wrap up this deep dive.
Yeah, it's been a pleasure exploring all this with you.
Thanks so much for all your insights.
And to our listener, we hope you found this journey
as fascinating as we have.

(18:06):
Keep exploring.
Keep asking those tough questions.
And remember, you have a voice in all of this,
in shaping the future of AI.
Until next time, keep learning, keep sharing,
and keep diving deep.