Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separating fact from fiction and science from bloviation. They're joined by special guests and talk about everything from machine consciousness to science fiction, from political economy to art made by machines.
It's finally here! The AI Con: How to Fight Big Tech's Hype and Create the Future We Want hit the shelves in May. In this special bonus episode, Alex and Emily speak to tech journalist Vauhini Vara at one of the book's online launch events, where they covered the misleading nature of the term "artificial intelligence," why the use of tools like ChatGPT will only ever cheapen human labor and enrich the alrea...
Because Sam Altman hates opening his laptop, OpenAI is merging with iPhone guy Jony Ive's design firm in the name of some mysterious new ChatGPT-enabled consumer products: Alex and Emily go full Mystery Science Theater and dissect the announcement video. Plus how tech billionaires like Sam Altman mythologize San Francisco while their money makes it less livable for everyone else.
This week, Alex and Emily talk with anthropologist and immigration lawyer Petra Molnar about the dehumanizing hype of border-enforcement tech. From hoovering up data to hunt anyone of ambiguous citizenship status, to running surveillance of physical borders themselves, "AI" tech is everywhere in the enforcement of national borders. And as companies ranging from Amazon, to NSO Group, to Palantir all profit, this widening o...
Emily and Alex pore over an elaborate science fiction scenario about the "inevitability" of Artificial General Intelligence or AGI by the year 2027, which rests atop a foundation of TESCREAL nonsense, and Sinophobia to boot.
References:
Fresh AI Hell:
AI persona bots for undercover cops
Palantir heart eyes Keir Starmer
Anti-vaxxers are grifting off the measles outbreak with AI-formulated supplements
It's been four months since we last cleared the backlog of Fresh AI Hell, and the bullshit is coming in almost too fast to keep up with. But between a page full of awkward unicorns and a seeming slowdown in data center demand, Alex and Emily have more good news than usual to accompany this round of catharsis.
AI Hell:
LLM processing like human language processing (not)
After "AI" stopped meaning anything, the hype salesmen moved on to "AI" "agents", those allegedly indefatigable assistants, allegedly capable of operating your software for you -- whether you need to make a restaurant reservation, book a flight, or book a flight to a restaurant reservation. Hugging Face's Margaret Mitchell joins Emily and Alex to help break down what agents actually are, and what ...
Measuring your talk time? Counting your filler words? What about "analyzing" your "emotions"? Companies that push LLM technology to surveil and summarize video meetings are increasingly offering to (purportedly) analyze your participation and assign your speech some metrics, all in the name of "productivity". Sociolinguist Nicole Holliday joins Alex and Emily to take apart claims about these "AI...
Emily and Alex read a terrible book so you don't have to! Come for a quick overview of LinkedIn co-founder and venture capitalist Reid Hoffman's opus of magical thinking, 'Superagency: What could possibly go right with our AI future' -- stay for the ridicule as praxis. Plus, why even this torturous read offers a bit of comfort about the desperate state of the AI boosters.
In the weeks since January 20, the US information ecosystem has been unraveling fast. (We're looking at you Denali, Gulf of Mexico, and every holiday celebrating people of color and queer people that used to be on Google Calendar.) As the country's unelected South African tech billionaire continues to run previously secure government data through highly questionable LLMs, academic librarian Raina Bloom joins Emily and Ale...
Sam Altman thinks fusion -- particularly a company he's personally invested in -- can provide the energy we "need" to develop AGI. Meanwhile, what if we just...put data centers on the Moon to save energy? Alex, Emily, and guest Tamara Kneese pour cold water on Silicon Valley's various unhinged, technosolutionist ideas about energy and the environment.
Dr. Tamara Kneese is director of climate, technology and justice...
In January, the United Kingdom's new Labour Party prime minister, Keir Starmer, announced a new initiative to go all in on AI in the hopes of big economic returns, with a promise to “mainline” it into the country’s veins: everything from offering public data to private companies, to potentially fast-tracking miniature nuclear power plants to supply energy to data centers. UK-based researcher Gina Neff helps explain why this fl...
Not only is OpenAI's new o3 model allegedly breaking records for how close an LLM can get to the mythical "human-like thinking" of AGI, but Sam Altman has some, uh, reflections for us as he marks two years since the official launch of ChatGPT. Emily and Alex kick off the new year unraveling these truly fantastical stories.
References:
OpenAI o3 Breakthrough High Score on ARC-AGI-Pub
From the blog of Sam Altman: Reflecti...
It’s been a long year in the AI hype mines. And no matter how many claims Emily and Alex debunk, there's always a backlog of Fresh AI Hell. This week, another whirlwind attempt to clear it, with plenty of palate cleansers along the way.
Fresh AI Hell:
Part I: Education
Medical residency assignments
"AI generated" UCLA course
"Could ChatGPT get an engineering degree?"
AI let...
Once upon a time, artificial general intelligence was the only business plan OpenAI seemed to have. Tech journalist Brian Merchant joins Emily and Alex for a time warp to the beginning of the current wave of AI hype, nearly a decade ago. And it sure seemed like Elon Musk, Sam Altman, and company were luring investor dollars to their newly-formed venture solely on the hand-wavy promise that someday, LLMs themselves would figure out ...
From Bill Gates to Mark Zuckerberg, billionaires with no education expertise keep using their big names and big dollars to hype LLMs for classrooms. Promising 'comprehensive AI tutors', or just 'educator-informed' tools to address understaffed classrooms, this hype is just another round of Silicon Valley pointing to real problems -- under-supported school systems -- but then directing attention and resources to their favorite ...
The company behind ChatGPT is back with a bombastic claim that their new o1 model is capable of so-called "complex reasoning." Ever-faithful, Alex and Emily tear it apart. Plus the flaws in a tech publication's new 'AI hype index,' and some palate-cleansing new regulation against data-scraping worker surveillance.
References:
OpenAI: Learning to reason with LLMs
Technology journalist Paris Marx joins Alex and Emily for a conversation about the environmental harms of the giant data centers and other water- and energy-hungry infrastructure at the heart of LLMs and other generative tools like ChatGPT -- and why the hand-wavy assurances of CEOs that 'AI will fix global warming' are just magical thinking, ignoring a genuine climate cost and imperiling the clean energy transition in th...
Can “AI” do your science for you? Should it be your co-author? Or, as one company asks, boldly and breathlessly, “Can we automate the entire process of research itself?”
Major scientific journals have banned the use of tools like ChatGPT in the writing of research papers. But people keep trying to make “AI Scientists” a thing. Just ask your chatbot for some research questions, or have it synthesize some human subjects to save you ti...
Did your summer feel like an unending barrage of terrible ideas for how to use “AI”? You’re not alone. It's time for Emily and Alex to clear out the poison, purge some backlog, and take another journey through AI hell -- from surveillance of emotions, to continued hype in education and art.
Fresh AI Hell:
Dr. Clara Berridge joins Alex and Emily to talk about the many 'uses' for generative AI in elder care -- from "companionship," to "coaching" like medication reminders and other encouragements toward healthier (and, for insurers, cost-saving) behavior. But these technologies also come with questionable data practices and privacy violations. And as populations grow older on average globally, technology s...