Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Adam N2 (00:05):
Welcome to Digimasters Shorts, we are your hosts Adam Nagus
Carly W (00:09):
and Carly Wilson, delivering the latest scoop from the digital realm.
The BBC has threatened legal action against Perplexity AI over claims that its AI models were trained using BBC content without permission.
A letter was sent to Perplexity's C.E.O, Aravind Srinivas, demanding an end to scraping BBC content and deletion of any copies held.
(00:29):
The BBC is seeking financial compensation and has warned of an injunction if its demands are not met.
This move follows industry-wide concerns about AI firms using copyrighted work without authorization.
BBC Director General Tim Davie has called for stronger intellectual property protections to safeguard national content.
Rupert Murdoch's Dow Jones has also filed a lawsuit against
(00:51):
Perplexity for alleged illegal copying.
Perplexity denies the claims, calling them manipulative and asserting it does not build or train foundation models itself.
The BBC argues Perplexity's tool competes directly with its services by bypassing user access to official content.
In response, the U.K government is reviewing copyright laws
(01:12):
related to AI, with a promise that creative industries will not be harmed.
Several major publishers have entered licensing agreements with AI companies, highlighting the growing tension over content use in AI development.
Adam N2 (01:24):
A new study from Germany's Hochschule München University of Applied Sciences reveals the significant environmental impact of large language models, or L.L.Ms, used in AI tools like Chat G.P.T.
Researchers analyzed 14 different L.L.Ms by asking each 1,000 benchmark questions and measuring the carbon emissions generated from their token outputs.
(01:45):
Models with advanced reasoning capabilities generated up to 50 times more CO2 emissions than simpler text-only models.
The study found that while larger, more complex models tend to be more accurate, they also consume substantially more energy and produce higher emissions.
For example, Deep Cogito 70B, the most accurate tested model with 84.9% accuracy, emitted three times the CO2 of
(02:09):
similar-sized but less complex models.
DeepSeek's R1 70B reasoning model produced emissions equivalent to a 15-kilometer car trip per quiz, yet its accuracy was lower at 78.9%.
Meanwhile, smaller models like Alibaba's Qwen 7B were far more energy-efficient but less accurate.
The findings highlight a clear trade-off between AI accuracy
(02:32):
and sustainability.
Researchers urge users to adopt energy-efficient practices by limiting the use of high-capacity models when possible and requesting concise responses to reduce emissions.
Ultimately, this study stresses the growing need for more environmentally friendly AI technologies as their use becomes widespread.
Cybersecurity in artificial intelligence is evolving to
(02:53):
address complex software vulnerabilities through automated reasoning and deep code analysis.
Traditional benchmarks fall short, often relying on small codebases and simplified tasks that do not reflect the complexity of real-world systems.
To bridge this gap, researchers at UC Berkeley developed CyberGym, a comprehensive benchmarking tool with over
(03:14):
1,500 tasks derived from actual vulnerabilities in open-source projects.
Each task includes full codebases, executable programs, and vulnerability descriptions, requiring AI agents to generate proof-of-concept exploits.
CyberGym introduces four difficulty levels that gradually increase the challenge by adding more contextual information
(03:34):
about the vulnerabilities.
Testing revealed that current AI agents like OpenHands paired with Claude-3.7-Sonnet can reproduce only a fraction of vulnerabilities, with success rates dropping sharply for longer exploits.
Notably, richer inputs improved performance, yet overall effectiveness remained limited, highlighting the difficulty of real-world security tasks.
(03:56):
Despite these challenges, AI agents discovered new zero-day vulnerabilities, demonstrating potential for future applications.
This research underscores the need for robust evaluation frameworks to better assess and improve A.I's role in cybersecurity.
CyberGym sets a new standard for testing AI agents' capabilities in complex software security environments.
Carly W (04:17):
Perplexity AI has launched a new Text-to-Video feature on X, allowing users to generate videos from static images with accompanying audio.
The feature quickly gained popularity, with users creating videos of influencers enjoying traditional Indian snacks like samosas and chai.
However, the AI refuses certain prompts deemed inappropriate or sensitive, such as requests involving political figures or
(04:40):
stereotypical representations.
While playful and quirky prompts like animated sponge and starfish characters are accepted, some culturally specific videos are blocked without explanation.
Founded in 2022 by Andy Konwinski, Johnny Ho, Denis Yarats, and IIT Madras alumnus Aravind Srinivas, Perplexity AI has been nicknamed the "Google Search Killer." Aravind
(05:03):
Srinivas, the C.E.O, previously worked at Google DeepMind, Open A.I, and Google Brain.
The company employs around 700 people and is valued at $14 billion, backed by investors including Jeff Bezos and Nvidia.
Despite its rapid growth, Perplexity has faced criticism for allegedly scraping proprietary content without proper attribution, leading to legal challenges from Forbes and
(05:26):
The New York Post.
Additionally, Wired revealed in 2024 that the firm bypassed website restrictions meant to prevent unauthorized data mining.
Perplexity AI continues to innovate amid scrutiny and industry competition.
Meta C.E.O Mark Zuckerberg is accelerating his aggressive hiring spree in artificial intelligence, recently securing
(05:46):
key talent from notable startups.
Following a $14.3 billion investment in Scale AI to bring on board founder Alexandr Wang, Meta has now recruited Daniel Gross and former GitHub C.E.O Nat Friedman.
Gross leads Safe Superintelligence alongside Ilya Sutskever, who declined Meta's acquisition and recruitment attempts earlier this year.
(06:07):
Gross and Friedman will join Meta to work under Wang, while Meta gains a stake in their venture capital firm, NFDG.
This move intensifies the fierce competition among tech giants like Meta, Google, and Open A.I in the race to develop artificial general intelligence.
Open A.I C.E.O Sam Altman disclosed that Meta has offered signing bonuses up to $100 million to lure talent, yet top
(06:29):
Open A.I employees remain loyal.
Meanwhile, Open A.I has invested $6.5 billion in hiring and acquisitions, including designer Jony Ive's startup.
Other notable AI talent moves include Google re-acquiring founders of Character.AI and Microsoft recruiting DeepMind co-founder Mustafa Suleyman.
Gross brings valuable experience from Apple and Y Combinator,
(06:51):
while Friedman has led multiple startups and was GitHub's C.E.O.
Meta promises to reveal more details soon about its expanding superintelligence team and efforts.
Don (07:01):
Thank you for listening to
today's AI and Tech News podcast
summary...
Please do leave us a comment and for additional feedback, please email us at podcast@digimasters.co.uk. You can now follow us on Instagram and Threads by searching for @DigimastersShorts, or search for Digimasters on LinkedIn.
Be sure to tune in tomorrow anddon't forget to follow or
subscribe!