Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separate fact from fiction, and science from bloviation. They're joined by special guests and talk about everything from machine consciousness to science fiction, from political economy to art made by machines.
This week, Alex and Emily talk with anthropologist and immigration lawyer Petra Molnar about the dehumanizing hype of border-enforcement tech. From hoovering up data to hunt anyone of ambiguous citizenship status, to running surveillance of physical borders themselves, "AI" tech is everywhere in the enforcement of national borders. And as companies ranging from Amazon, to NSO Group, to Palantir all profit, this widening o...
Emily and Alex pore through an elaborate science fiction scenario about the "inevitability" of Artificial General Intelligence or AGI by the year 2027 - which rests atop a foundation of TESCREAL nonsense, and Sinophobia to boot.
References:
Fresh AI Hell:
AI persona bots for undercover cops
Palantir heart eyes Keir Starmer
Anti-vaxxers are grifting off the measles outbreak with AI-formulated supplements
It's been four months since we last cleared the backlog of Fresh AI Hell, and the bullshit is coming in almost too fast to keep up with. But between a page full of awkward unicorns and a seeming slowdown in data center demand, Alex and Emily have more good news than usual to accompany this round of catharsis.
AI Hell:
LLM processing like human language processing (not)
After "AI" stopped meaning anything, the hype salesmen moved on to "AI" "agents", those allegedly indefatigable assistants, allegedly capable of operating your software for you -- whether you need to make a restaurant reservation, book a flight, or book a flight to a restaurant reservation. Hugging Face's Margaret Mitchell joins Emily and Alex to help break down what agents actually are, and what ...
Measuring your talk time? Counting your filler words? What about "analyzing" your "emotions"? Companies that push LLM technology to surveil and summarize video meetings are increasingly offering to (purportedly) analyze your participation and assign your speech some metrics, all in the name of "productivity". Sociolinguist Nicole Holliday joins Alex and Emily to take apart claims about these "AI"...
Emily and Alex read a terrible book so you don't have to! Come for a quick overview of LinkedIn co-founder and venture capitalist Reid Hoffman's opus of magical thinking, 'Superagency: What could possibly go right with our AI future' -- stay for the ridicule as praxis. Plus, why even this torturous read offers a bit of comfort about the desperate state of the AI boosters.
References:
In the weeks since January 20, the US information ecosystem has been unraveling fast. (We're looking at you, Denali, Gulf of Mexico, and every holiday celebrating people of color and queer people that used to be on Google Calendar.) As the country's unelected South African tech billionaire continues to run previously secure government data through highly questionable LLMs, academic librarian Raina Bloom joins Emily and Ale...
Sam Altman thinks fusion - particularly a company he's personally invested in - can provide the energy we "need" to develop AGI. Meanwhile, what if we just...put data centers on the Moon to save energy? Alex, Emily, and guest Tamara Kneese pour cold water on Silicon Valley's various unhinged, technosolutionist ideas about energy and the environment.
Dr. Tamara Kneese is director of climate, technology and justice...
In January, the United Kingdom's new Labour Party prime minister, Keir Starmer, announced a new initiative to go all in on AI in the hopes of big economic returns, with a promise to “mainline” it into the country’s veins: everything from offering public data to private companies, to potentially fast-tracking miniature nuclear power plants to supply energy to data centers. UK-based researcher Gina Neff helps explain why this fl...
Not only is OpenAI's new o3 model allegedly breaking records for how close an LLM can get to the mythical "human-like thinking" of AGI, but Sam Altman has some, uh, reflections for us as he marks two years since the official launch of ChatGPT. Emily and Alex kick off the new year unraveling these truly fantastical stories.
References:
OpenAI o3 Breakthrough High Score on ARC-AGI-Pub
From the blog of Sam Altman: Reflecti...
It’s been a long year in the AI hype mines. And no matter how many claims Emily and Alex debunk, there's always a backlog of Fresh AI Hell. This week, another whirlwind attempt to clear it, with plenty of palate cleansers along the way.
Fresh AI Hell:
Part I: Education
Medical residency assignments
"AI generated" UCLA course
"Could ChatGPT get an engineering degree?"
AI let...
Once upon a time, artificial general intelligence was the only business plan OpenAI seemed to have. Tech journalist Brian Merchant joins Emily and Alex for a time warp to the beginning of the current wave of AI hype, nearly a decade ago. And it sure seemed like Elon Musk, Sam Altman, and company were luring investor dollars to their newly-formed venture solely on the hand-wavy promise that someday, LLMs themselves would figure out ...
From Bill Gates to Mark Zuckerberg, billionaires with no education expertise keep using their big names and big dollars to hype LLMs for classrooms. Promising ‘comprehensive AI tutors', or just ‘educator-informed’ tools to address understaffed classrooms, this hype is just another round of Silicon Valley pointing to real problems -- under-supported school systems -- but then directing attention and resources to their favorite ...
The company behind ChatGPT is back with a bombastic claim that their new o1 model is capable of so-called "complex reasoning." Ever-faithful, Alex and Emily tear it apart. Plus the flaws in a tech publication's new 'AI hype index,' and some palate-cleansing new regulation against data-scraping worker surveillance.
References:
OpenAI: Learning to reason with LLMs
Technology journalist Paris Marx joins Alex and Emily for a conversation about the environmental harms of the giant data centers and other water- and energy-hungry infrastructure at the heart of LLMs and other generative tools like ChatGPT -- and why the hand-wavy assurances of CEOs that 'AI will fix global warming' are just magical thinking, ignoring a genuine climate cost and imperiling the clean energy transition in th...
Can “AI” do your science for you? Should it be your co-author? Or, as one company asks, boldly and breathlessly, “Can we automate the entire process of research itself?”
Major scientific journals have banned the use of tools like ChatGPT in the writing of research papers. But people keep trying to make “AI Scientists” a thing. Just ask your chatbot for some research questions, or have it synthesize some human subjects to save you ti...
Did your summer feel like an unending barrage of terrible ideas for how to use “AI”? You’re not alone. It's time for Emily and Alex to clear out the poison, purge some backlog, and take another journey through AI hell -- from surveillance of emotions, to continued hype in education and art.
Fresh AI Hell:
Dr. Clara Berridge joins Alex and Emily to talk about the many 'uses' for generative AI in elder care -- from "companionship," to "coaching" like medication reminders and other encouragements toward healthier (and, for insurers, cost-saving) behavior. But these technologies also come with questionable data practices and privacy violations. And as populations grow older on average globally, technology s...
The Washington Post is going all in on AI -- surely this won't be a repeat of any past, disastrous newsroom pivots! 404 Media journalist Samantha Cole joins to talk journalism, LLMs, and why synthetic text is the antithesis of good reporting.
References:
The Washington Post Tells Staff It’s Pivoting to AI: "AI everywhere in our newsroom."
Response: Defector Media Promotes Devin The Dugong To Chief AI Officer, Unvei...
Could this meeting have been an e-mail that you didn't even have to read? Emily and Alex are tearing into the lofty ambitions of Zoom CEO Eric Yuan, who claims the future is an LLM-powered 'digital twin' that can attend meetings in your stead, make decisions for you, and even be tuned to different parameters with just the click of a button.
References:
The CEO of Zoom wants AI clones in meetings