I have to roll my eyes at the constant clickbait headlines on technology and ethics. If we want to get anything done, we need to go deeper. That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business. If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.
We often defer to the judgment of experts. I usually defer to my doctor’s judgment when he diagnoses me, I defer to quantum physicists when they talk to me about string theory, etc. I don’t say “well, that’s interesting, I’ll take it under advisement” and then form my own beliefs. Any beliefs I have on those fronts I replace with their beliefs. But what if an AI “knows” more than us? It is an authority in the field in which we’re q...
Are claims about AI destroying humanity just more AI hype we should ignore? My guests today, Risto Uuk and Torben Swoboda, assess three popular arguments for why we should dismiss them and focus solely on the AI risks that are here today. But they find each argument flawed, arguing that, unless some fourth powerful argument comes along, we should devote resources to identifying and avoiding potential existential risks to humanity po...
I have to admit, AI can do some amazing things. More specifically, it looks like it can perform some impressive intellectual feats. But is it actually intelligent? Does it understand? Or is it just really good at statistics? This and more in my conversation with Lisa Titus, former professor of philosophy at the University of Denver and now AI Policy Manager at Meta. Originally aired in season one.
By the end of this crash course, you’ll understand a lot about the AI ethics landscape. Not only will it give you your bearings, but it will also enable you to identify what parts of the landscape you find interesting so you can do a deeper dive.
People want AI developed ethically, but is there actually a business case for it? The answer better be yes since, after all, it’s businesses that are developing AI in the first place. Today I talk with Dennis Hirsch, Professor of Law and Computer Science at Ohio State University, who is conducting empirical research on this topic. He argues that AI ethics - or as he prefers to call it, Responsible AI - delivers a lot of bottom line...
Automation is great, right? It speeds up what needs to get done. But is that always a good thing? What about in the process of scientific discovery? Yes, AI can automate a lot of science by running thousands of virtual experiments and generating results - but is something lost in the process? My guest, Ramón Alvarado, a professor of philosophy and a member of the Philosophy and Data Science Initiative at the University of Oregon, th...
Behind all those algorithms are the people who create them and embed them into our lives. How did they get that power? What should they do with it? What are their responsibilities? This and more with my guest Chris Wiggins, Chief Data Scientist at the New York Times, Associate Professor of Applied Mathematics at Columbia University, and author of the book “How Data Happened: A History from the Age of Reason to the Age of Algorithms...
People in the AI safety community are laboring under an illusion, perhaps even a self-deception, my guest argues. They think they can align AI with our values and control it so that the worst doesn’t happen. But that’s impossible. We can never know how AI will act in the wild any more than we can know how our children will act once they leave the house. Thus, we should never give more control to an AI than we would give an individu...
Developers are constantly testing how online users react to their designs. Will they stay longer on the site because of this shade of blue? Will they get depressed if we show them depressing social media posts? What happens if we intentionally mismatch people on our dating website? When it comes to shades of blue, perhaps that’s not a big deal. But when it comes to mental health and deceiving people? Now we’re in ethically choppy w...
There’s a picture in our heads that’s overly simplistic and the result is not thinking clearly about AI risks. Our simplistic picture is that a team develops AI and then it gets used. The truth, the more complex picture, is that 1000 hands touch that AI before it ever becomes a product. This means that risk identification and mitigation is spread across a very complex supply chain. My guest, Jason Stanley, is at the forefront...
From the best of season 1: Microsoft recently announced an (alleged!) breakthrough in quantum computing. But what in the world is a quantum computer, what can it do, and what are the potential ethical implications of this powerful new tech?
Brian and I discuss these issues and more. And don’t worry! No knowledge of physics required.
Every specialist in anything thinks they should have a seat at the AI ethics table. I’m usually skeptical. But psychologist Madeline Reinecke, Ph.D., did a great job defending her view that – you guessed it – psychologists should have a seat at the AI ethics table.
Our conversation ranged from the role of psychologists in creating AI that supports healthy human relationships to when children start and stop attributing sentience to r...
A fun format for this episode. In Part I, I talk about how I see agentic AI unfolding and what ethical, social, and political risks come with it. In Part II, Eric Corriel, digital strategist at the School of Visual Arts and a close friend, tells me why he thinks I’m wrong. Debate ensues.
Jaahred Thomas is a VC friend of mine who wanted to talk about the evolving landscape of AI ethics in startups and business generally. So rather than have a normal conversation like people do, we made it an episode! Jaahred asks me a bunch of questions about AI ethics and startups, investors, Fortune 500 companies, and more, and I tell him the unvarnished truths about where corporate America is in the AI ethics journey and wh...
From the best of season 1: The hospital faced an ethical question: should we deploy robots to help with elder care?
Let’s look at a standard list of AI ethics values: justice/fairness, privacy, transparency, accountability, explainability.
But as Ami points out in our conversation, that standard list doesn’t include a core value at the hospital: the value of caring.
And that’s one example of one of three objections to a view he calls...
From the best of season 1: Innovation is great…but hype is bad. Not only has all this talk of innovation not increased innovation, but it has also created an environment in which leaders struggle to make reasoned judgments about where to devote resources. So says Lee Vinsel, professor in the Department of Science, Technology and Society at Virginia Tech, in this Ethical Machines episode.
ALSO! We want proactive regulations before the sh!t hit...
“Sustainability,” “purpose/mission/value-driven,” “human-centric design.” These are terms companies use so they don’t have to say “ethics.” My contention is that this is bad for business and bad for society at large. Our world, corporate and otherwise, is confronted with a growing mountain of ethical problems, spurred on by technologies that bring us fresh new ways of realizing our familiar ethical nightmares. These issues do not d...
Democracy is about how we ought to distribute power in society and, more specifically, it’s the claim that people ought to have a significant say in how they are ruled. So if we’re talking about AI’s impact on democracy, we should focus on how our use of AI interacts with our concern that democracy be respected, upheld, and improved.
But my guest Ted Lechterman, UNESCO Chair in AI Ethics and Governance at IE University’s School of ...
From the best of season 1.
Well, I didn’t see this coming. Talking about legal and philosophical conceptions of copyright turns out to be intellectually fascinating and challenging. It involves not only concepts about property and theft, but also about personhood and invasiveness. Could it be that training AI with author/artist work violates their self?
I talked with Darren Hick about all this, who wr...
My guest and I have been doing AI governance for businesses for a combined 17+ years. We started way before genAI was a big thing. But I’d say I’m more of a qualitative guy and he’s more quant. Nick Elprin is the CEO of an AI governance software company, after all. How has AI ethics or AI governance evolved over that time, and what does cutting-edge governance look like? Perhaps you’re about to find out…