
November 6, 2024 · 14 mins

In this episode of AI Uncorked by AiSultana, we dive into the complex intersection of artificial intelligence and democracy as the 2024 US Presidential election unfolds. Explore why campaigns, from the Harris-Walz team's cautious integration of basic AI tools to Trump’s more visible use of AI-generated content, are both embracing and scrutinizing these technologies. We'll discuss public apprehensions about AI’s role in misinformation, the fragmented regulatory responses across the US, and global comparisons with the EU and China’s distinct approaches to AI governance. Tune in to understand how this pivotal election is shaping, and being shaped by, AI's rapid evolution.

https://www.aisultana.com


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
OK, so you want to understand how AI actually played a role in the 2024 election.

(00:06):
Mm hmm.
We've got a mountain of sources here, articles, research, you name it.
And let me tell you, yeah, it wasn't quite the digital wild west
everyone was predicting. Right.
Remember all that talk about AI turning the election into total chaos?
Yeah, I felt like we were on the verge of some digital apocalypse. Right.
But it turned out AI was more of this undercurrent, quietly shaping things.

(00:29):
So more subtle influence than like a complete upheaval.
Exactly. What's interesting is that the big AI bogeyman
everyone was afraid of, the deepfakes. Yeah.
Well, they didn't really live up to all the hype.
Like, they existed, they were around, but not the game changer
many thought they'd be.
It was all over the news, articles about how deepfakes could swing the election.
You know? Right.
It felt like we were bracing for this flood of fake videos.

(00:52):
So real you couldn't tell what was true anymore.
I remember thinking, are we going to need some kind of like truth
detector for every video we see?
But in reality, deepfakes were more like a few isolated incidents,
not the tsunami everyone feared.
Although there was this one case with a fake Joe Biden audio clip.

(01:12):
Oh, yeah, the one where he's supposedly telling people not to vote in New Hampshire.
You got it. Yeah.
That case really showed how AI can blur those lines between real and fake.
I remember hearing that and thinking, wait, did he actually say that?
It was so convincing. Exactly.
That's the thing about AI-generated audio.
It can be incredibly realistic.
It was so believable that it actually led to criminal charges

(01:33):
and a ban on those AI-generated robocalls. Wow.
So that one fake audio clip really kicked off a whole debate about AI in campaigns.
Oh, absolutely.
It sparked a huge conversation about the ethics, the legal side, all of it.
Like opening Pandora's box, unleashing this tech with all its potential,
but then having to deal with those unintended consequences. Right.
So deepfakes might have been more of a sideshow,

(01:55):
but AI was definitely being used behind the scenes by the campaigns themselves.
OK, so how were they using AI then?
Well, that's where things get really interesting.
It was like a political science experiment playing out in real time.
We saw two very different approaches.
You had the Harris campaign very publicly distancing themselves from AI.
Oh, right. They were all about transparency, saying things like,

(02:17):
we won't use AI to manipulate voters. Exactly.
They basically limited AI to backstage tasks.
Things like data analysis and website optimization.
I remember thinking that was pretty refreshing,
given how secretive some campaigns can be. Definitely.
Their message was all about building trust with voters,
and they didn't want AI to jeopardize that.

(02:39):
Smart move, especially given all the anxiety around AI in elections.
For sure. But then you had the Trump campaign taking a completely different tack.
Yeah, Trump seemed to embrace AI, even sort of flaunted it at times.
There was that one time he bragged about using AI
to rewrite an entire speech in like 15 seconds.
Oh, right. I remember that.
I think he was trying to project this image of being the tech-savvy candidate,

(03:01):
even if it meant, you know, stirring up some controversy.
Definitely raised some eyebrows.
And while his campaign was officially vague about how much they were using AI,
his actions spoke pretty loudly.
It definitely makes you wonder what else they were using AI for behind closed doors.
Exactly. Especially when you consider that over 30 tech companies

(03:22):
were pitching all sorts of AI tools to campaigns.
Wow, 30.
Yeah. But most of the campaigns kept quiet about their AI strategies.
It feels like we're just seeing the tip of the iceberg here.
Right. Before we get lost in the conspiracy theories, though,
I think it's worth talking about how the public felt about all this.
Yeah, good point.
Because there's this fascinating disconnect between how AI was actually used

(03:43):
and the level of fear people were expressing.
Absolutely.
Polls were showing this widespread anxiety about AI's potential
to mess with our democracy.
57 percent of adults were worried about AI-generated fake news.
And a good chunk believed AI would be used for harmful purposes in the election.
So people were expecting this massive wave of AI disruption, but.

(04:05):
But it turned out to be more of a strong undercurrent, quietly shaping things.
Which raises the question: did we just get lucky this time around?
Or are we underestimating AI's potential to really shake things up in future elections?
That is the million dollar question, isn't it?
And that leads us directly into the next big part of this story.

(04:26):
How do we control this AI beast?
How do we make sure it's used responsibly, not just in elections,
but in every aspect of our lives?
OK, so buckle up, listeners, because we're about to wade
into the wild world of AI regulation.
This is where things get really interesting.
Absolutely. So AI regulation, it's a bit of a tangled web, isn't it?
Yeah, feels like we're trying to map a landscape that's constantly shifting.

(04:49):
Exactly. But getting a handle on these different approaches
is key to understanding where AI is headed, not just in elections, but everywhere.
OK, so you're saying AI regulation is kind of a moving target.
Let's start with the EU's AI Act, which I know you've mentioned is pretty far-reaching.
What's the thinking behind it?
It's like imagine a parent setting boundaries for their kid, right?
Trying to keep them safe, but also letting them explore and grow.

(05:11):
That's kind of how the EU is approaching this.
They're not trying to stifle innovation, but they are being proactive
about making sure AI develops ethically.
So not banning it outright, but setting some guardrails.
Right. The EU AI Act sorts AI systems based on their risk level,
like a triage system at a hospital.
So higher risk means stricter rules.

(05:33):
Makes sense. What does that look like in practice?
High-risk AI systems, things like those used in health care or law enforcement,
face way stricter rules than, say, an AI that's recommending movies.
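
A rough sketch of that tiered idea in code (illustrative only; the four tier names mirror the Act's broad categories, but the example systems and their assignments are our own assumptions):

```python
# Illustrative sketch of the EU AI Act's risk-based tiering.
# Tier names follow the Act's four broad categories; the example
# systems and their assignments are assumptions for clarity.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g., social scoring)"
    HIGH = "strict duties: risk assessment, documentation, human oversight"
    LIMITED = "transparency duties (e.g., disclose that a chatbot is a bot)"
    MINIMAL = "largely unregulated (e.g., spam filters, recommenders)"

# Hypothetical triage of a few systems into tiers.
examples = {
    "medical-diagnosis assistant": RiskTier.HIGH,
    "law-enforcement face matching": RiskTier.HIGH,
    "movie recommender": RiskTier.MINIMAL,
}

for system, tier in examples.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```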
Right, because the stakes are higher.
So what does a company actually have to do to comply if they're developing one of these high-risk systems?
It's not just, oh, we built a cool AI, let's launch it.

(05:53):
They have to show that their system is safe, unbiased and transparent.
That means risk assessments, detailed documentation,
even having humans in the loop to oversee the AI's decisions.
So it's a pretty rigorous process.
But how do they actually enforce these rules?
What's stopping a company from just cutting corners?
Well, the EU can hit companies with some hefty fines if they violate the AI Act.

(06:17):
We're talking millions of euros, potentially even a percentage of their global revenue.
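
For scale, the Act's penalties work as "whichever is higher" between a fixed cap and a share of worldwide turnover, topping out at EUR 35 million or 7 percent for prohibited practices. A toy calculation (a simplified sketch, not legal guidance):

```python
# Toy illustration of the AI Act's "whichever is higher" fine structure.
# EUR 35M / 7% is the top tier (prohibited practices); lesser violations
# carry smaller caps. Simplified sketch, not legal guidance.

def max_fine(global_revenue_eur: float,
             fixed_cap_eur: float = 35_000_000,
             revenue_share: float = 0.07) -> float:
    """Greater of a fixed cap or a share of worldwide annual turnover."""
    return max(fixed_cap_eur, revenue_share * global_revenue_eur)

# A firm with EUR 2 billion in global revenue:
print(f"{max_fine(2e9):,.0f} EUR")  # 140,000,000 EUR
```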
OK, yeah, I bet that gets their attention.
So with the EU, it's about setting clear rules and making sure everyone's playing by them.
How does that contrast with the US approach, which you called more fragmented?
The US, think of it like a patchwork quilt with each state kind of sewing its own piece without a clear overall design.

(06:40):
There's no single federal law like the EU's AI Act.
Ah, so you've got this situation where one state might have strict rules on something like facial recognition,
while another state has basically nothing.
Exactly. It's a real mixed bag.
So is there any federal oversight at all?
There are some federal agencies working on AI guidelines, but it's more like suggestions than actual rules.

(07:02):
So it's a bit of a wild west then.
I imagine that creates a lot of uncertainty for companies.
Yeah, it can definitely stifle innovation because companies are facing this inconsistent landscape.
It's like trying to build a house when the building codes change every few miles.
It sounds like a logistical nightmare.
And doesn't that raise concerns about whether this approach is enough to protect people's rights and safety?

(07:23):
That's the tradeoff, right?
The US model might lead to faster innovation, but it also risks more loopholes and potential ethical pitfalls.
OK, so we've got the EU's cautious approach and the US's more hands-off, innovation-focused approach.
What about China? Where do they fit into all of this?
China is a whole different ballgame.
Their AI strategy is very much tied to their government and their national goals.

(07:47):
They're investing heavily in AI, but the focus is on economic growth and maintaining social control.
Less about ethics and more about how AI can serve the state.
So when we're talking about AI regulation in China, it's not just about safety or fairness.
It's about using AI as a tool for control.
Think of it this way.
The EU is trying to build a safe and ethical playground for AI,

(08:08):
while China is building a meticulously controlled garden where AI grows only in ways that serve the government.
They're using AI for things like surveillance, censorship, even predicting and preventing social unrest.
It sounds both impressive and a little unsettling, honestly.
Yeah.
OK, so three very different players, three very different approaches to AI.

(08:29):
It's like a global race to shape the future of this technology.
It is a fascinating race to watch because the winner will have a huge impact on how AI is developed and used in the future.
It really raises the question: what does responsible AI development even look like, and who gets to decide?
These are questions we're only just beginning to grapple with.
Yeah, it's a lot to process.

(08:50):
We've talked about the big picture, but can you give me a concrete example of how these different approaches might play out in the real world?
Let's say there's a company that develops an AI to help judges make sentencing decisions in criminal cases.
OK, I can see where this is going.
In the EU, that would almost certainly be classified as a high-risk system under the AI Act.
Right, because you're talking about potentially impacting people's lives in a really significant way.

(09:13):
Exactly.
So in the EU, the company would have to prove that their AI isn't biased against certain groups,
that its decision-making process is transparent and that there are human checks and balances in place.
They can't just unleash this AI into the courtroom and hope for the best.
So the EU is basically saying, prove to us that your AI is fair and just before you even think about using it in a sensitive context like that.
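
To make "prove your AI isn't biased" concrete, here is a minimal sketch of one common audit probe, a demographic parity check. The data and the 0.10 threshold are made-up assumptions; the AI Act does not mandate any particular fairness metric:

```python
# Minimal sketch of a demographic parity check, one probe a bias audit
# of a high-risk system might include. All data here is hypothetical;
# the AI Act does not prescribe a specific fairness metric.

def harsh_rate(decisions: list[int]) -> float:
    """Fraction of cases given the harsher outcome (1 = harsh, 0 = lenient)."""
    return sum(decisions) / len(decisions)

def parity_gap(outcomes_by_group: dict[str, list[int]]) -> float:
    """Largest difference in harsh-outcome rates between any two groups."""
    rates = [harsh_rate(d) for d in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical audit data: model recommendations per demographic group.
audit = {
    "group_a": [1, 0, 1, 1, 0, 1, 0, 0],  # 50% harsh
    "group_b": [1, 0, 0, 0, 0, 1, 0, 0],  # 25% harsh
}

gap = parity_gap(audit)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # assumed threshold; a real audit would justify its own
    print("Flag for human review before deployment.")
```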

(09:38):
Exactly.
But now imagine a different scenario in the US with its more patchwork approach.
Let's say a tech startup develops facial recognition software that's incredibly accurate, but also raises privacy concerns.
OK, so potentially beneficial, but also ethically tricky.
Right. In the EU, they'd likely face strict regulations, maybe even a ban.

(09:59):
But in the US, it's a whole different story.
No federal law specifically on facial recognition, so it's a free-for-all, depending on which state you're in.
The startup might face tight restrictions in one state, but have almost no oversight in another.
So for companies, it's a mix of opportunities and ethical dilemmas.
They have to be really strategic about where they develop and use certain AI technologies.

(10:20):
It's like playing a game of chess where the rules change depending on which square you land on.
It all boils down to how we balance innovation with ethical considerations.
AI development is a lot more complex than just writing some code and calling it a day.
Absolutely. It's intertwined with politics, economics, social values, everything.
And the choices we make now will shape the world we live in tomorrow.

(10:45):
OK, we've explored these different approaches to AI regulation, and it seems like there's no easy answer.
But I think it's time to zoom out a bit and ask, what does this all mean?
What are the bigger implications of AI for humanity?
You're right. It's time to get philosophical.
This isn't just about tech. It's about society.
AI is forcing us to confront some fundamental questions about what it means to be human in a world where machines are becoming more and more capable.

(11:10):
So where were we?
Right. We were talking about the big picture, the long term implications of AI, the stuff that keeps you up at night.
Yeah, exactly. I mean, one of the most pressing concerns has got to be the future of work, right?
The robots taking our jobs scenario.
Yeah, but it's starting to feel less like science fiction and more like, well, a real possibility.
It's not just factory jobs or truck drivers we're talking about anymore either.

(11:32):
Exactly. We're seeing AI making its way into fields like law, medicine, even creative fields.
Think about it. AI diagnosing diseases better than human doctors, writing legal briefs that are flawless, composing music that rivals the greats.
It's kind of mind blowing, honestly, and a little scary too.
Right. It makes you wonder what happens to our sense of purpose when machines can do so much of what we do and do it better?

(11:57):
What will we do all day? What happens to our sense of self-worth?
Exactly. It's not just about losing a paycheck. It's about like our identity.
How do we find meaning in a world where our skills and expertise are being outpaced by machines?
It feels like we're at this point where we need to figure out a whole new social contract, one that takes into account this new relationship we have with machines.

(12:18):
We're at a crossroads. Do we try to control them, compete with them, work with them? What's the best way forward?
So it's not about rejecting AI completely.
No, not at all. It's about making sure it develops in a way that aligns with our values, you know?
But it feels so massive. Like, how do we even begin to address that?
What can regular people even do to make sure AI is used ethically?

(12:42):
It does feel like a lot of those decisions are being made in these tech company boardrooms and government offices, places that feel so far removed from our daily lives.
I know it can feel overwhelming, but I honestly believe we all have a part to play here.
First things first, stay informed. Read up on this stuff. Listen to podcasts like this one.
Talk about it with people. The more you understand about AI, the better equipped you'll be to have a voice in this whole thing.

(13:05):
So knowledge is power. But what about taking action? What can we do beyond just reading and talking?
Don't underestimate the power of your voice. Seriously.
Talk to your elected officials. Let them know what you think about AI regulations.
There are organizations out there fighting for ethical AI development.
Support them and talk to your friends, family, everyone about the kind of future you want to see.

(13:28):
These conversations might feel small, but they matter.
It's about recognizing that we're not just along for the ride with this technology. We can actually shape it.
Exactly. The future isn't something that just happens to us. It's something we create.
It feels like we're standing at the edge of something huge, looking at a future that could be amazing, but also has some risks.

(13:50):
And that's exactly why we need to be having these conversations, asking tough questions, demanding transparency from the people developing and using AI.
Well, on that note, I think it's time to wrap up this deep dive.
We've explored how AI played a role in the 2024 election, the global race to regulate it,
and all the big philosophical questions it raises about the future of work, society, even what it means to be human.

(14:13):
It's been a journey, and I hope everyone listening has come away with a better understanding of just how complex this issue is
and feels empowered to be a part of shaping its future.
Me too. If there's one thing I want people to take away from all of this,
it's that the future isn't set in stone. It's up to us to shape it with every choice we make.
So stay curious, stay informed, and stay engaged.

(14:35):
The future of AI, and maybe even humanity itself, is in our hands.
And on that hopeful note, we'll sign off.
Thanks for joining us on this deep dive, and until next time, keep exploring the world with open minds.