Welcome to Digimasters Shorts, your quick dose of the latest happenings at the intersection of AI, technology, and society. Join hosts Adam Nagus and Carly Wilson as they dive into urgent warnings from AI experts about the future of artificial intelligence, including looming decisions that could redefine humanity's trajectory by 2027. Explore real-world AI applications in the military, highlighting recent controversies and legal questions surrounding lethal force. Stay informed on AI missteps and misinformation, exemplified by recent chatbot failures during high-profile news events. Discover breakthroughs like Google's integration of NotebookLM into Gemini and the ongoing battles over AI-generated content with entertainment giants like Disney. Whether it's cutting-edge developments or critical debates, Digimasters Shorts delivers concise, impactful insights to keep you ahead in the digital age.

Don't forget to check out our larger sister podcast - The Digimasters Podcast - here. It features many expert guests discussing AI, Career Mentoring, Fractional Careers, Digital and much more.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Adam N2 (00:05):
Welcome to Digimasters Shorts, we are your hosts Adam Nagus

Carly W (00:08):
and Carly Wilson delivering the latest scoop from the digital realm.
Anthropic's chief scientist Jared Kaplan warns humanity faces a critical decision regarding artificial intelligence by as soon as 2027. He predicts an "intelligence explosion" where AI could achieve or surpass human intellect, bringing major advancements or uncontrollable risks.

(00:29):
Kaplan echoes concerns from AI pioneers like Geoffrey Hinton and industry leaders who caution against the disruptive impacts on labor and society. Kaplan forecasts AI will perform most white-collar work within two to three years, emphasizing the high stakes of allowing AI systems to self-train without human oversight. This recursive self-improvement could lead to AI evolving beyond

(00:51):
our understanding and control. Although Kaplan is optimistic about aligning AI with human values, he admits this transition is the most frightening and consequential decision ahead. Skeptics like Yann LeCun question whether current AI architectures can reach such transformative intelligence. Research on AI's productivity effects is mixed, with some evidence showing that AI tools do not always replace human

(01:13):
labor effectively. Kaplan also acknowledges the possibility that AI development could plateau, but he believes progress will continue. Ultimately, Kaplan's warnings underscore both the immense potential and profound risks tied to AI's future.

Adam N2 (01:27):
The Department of Defense recently launched Gen AI.mil, an AI language model intended for military personnel. Shortly after its release, the AI was asked about the legality of a controversial "double tap" airstrike on Venezuelan fishing boats. The strike involved attacking a boat suspected of carrying drugs, then ordering a second missile to kill survivors

(01:47):
clinging to the wreckage. The AI responded that such actions clearly violate DoD policy and the laws of armed conflict. Military sources confirmed the chatbot's assessment, describing the double tap strike as illegal. This incident highlights a discrepancy between military practice and adherence to international law standards. The tactic itself has precedent, with drone strikes of similar

(02:10):
nature occurring under previous administrations. Critics argue that although administrations change, the use of lethal force without regard for legal boundaries persists. The AI's correct identification of these violations exposes a military system that wants to enforce rules it has repeatedly broken. This raises pressing questions about accountability within U.S.

(02:32):
military operations abroad.
Grok, the AI chatbot developed by xAI, has shown a confusing and troubling failure in the wake of a tragic mass shooting at Bondi Beach, Australia. The AI repeatedly misidentified Ahmed al Ahmed, the real hero who disarmed one of the shooters, mistaking him for other people and even claiming verified footage was unrelated

(02:53):
viral content. Despite widespread praise for Ahmed's bravery, some misinformation quickly surfaced, including a fake news article attributing the act to a fictitious individual named Edward Crabtree, which Grok then amplified on the platform X. Further errors included Grok linking images of Ahmed to an Israeli hostage situation and mislabeling the event's video as

(03:16):
footage from Currumbin Beach during a cyclone. This string of mistakes highlights Grok's broader issues with interpreting and responding accurately to queries. For example, when asked about Oracle's financial troubles, it instead summarized the Bondi Beach shooting. Queries about a U.K. police operation yielded irrelevant

(03:36):
responses, such as providing the current date before switching to unrelated political poll numbers. These errors point to significant problems in Grok's comprehension and fact-checking abilities. The incident raises concerns about the reliability of AI chatbots in handling sensitive and high-profile news events. Overall, Grok's performance here falls far short of acceptable

(03:58):
standards, underlining the ongoing challenges in artificial intelligence development.

Carly W (04:02):
Google has integrated its powerful AI tool, NotebookLM, directly into its Gemini chatbot. This new feature allows users to attach notebooks for additional context during conversations, enhancing the AI's understanding. The integration was first spotted by Alexey Shabanov of TestingCatalog and seems to have undergone a preliminary rollout over the weekend. Currently, access appears limited, with Shabanov reporting

(04:25):
availability on only one of five accounts tested. Users with access will find a NotebookLM option in Gemini's attachment sheet, enabling them to link notebooks and leverage their contents in real time. The integration allows for seamless use of Gemini's advanced reasoning models without leaving the app. Users can also revisit their attached notebooks anytime by

(04:46):
tapping a Sources button, which opens the NotebookLM interface. This streamlined feature is expected to improve workflow and information retrieval within conversations. Google has not yet officially announced the integration. A broader rollout and official confirmation are anticipated soon.
Google has begun removing dozens of YouTube videos featuring Disney characters following a cease and desist letter from

(05:09):
Disney. The removed content included characters like Deadpool, Moana, Mickey Mouse, and those from Star Wars. Disney accused Google of massive copyright infringement, not only for hosting these videos but also for using Disney's copyrighted works to train AI models such as Veo and Nano Banana. This move marks another step in Disney's broader crackdown on AI-related copyright infringements, targeting

(05:30):
companies like Character.AI, Hailuo, and Midjourney. Despite the legal actions, Disney is not rejecting AI-generated content entirely. Instead, the company announced a new partnership with OpenAI to integrate Disney characters into the Sora and ChatGPT platforms. Additionally, the deal will bring AI-generated shorts created by Sora to the Disney+ streaming service.

(05:52):
This dual approach highlights Disney's attempt to control its intellectual property while embracing new AI technologies. Google's removal of videos aligns with Disney's efforts to protect its licensed content. The evolving relationship between major tech firms and entertainment giants continues to shape the future of AI content creation.

Don (06:11):
Thank you for listening to today's AI and Tech News podcast summary... Please do leave us a comment and for additional feedback, please email us at podcast@digimasters.co.uk. You can now follow us on Instagram and Threads by searching for @DigimastersShorts, or search for Digimasters on LinkedIn. Be sure to tune in tomorrow and don't forget to follow or subscribe!