
July 7, 2025 • 17 mins
Apple may use ChatGPT or Claude to power its smarter Siri
Explainer: Will the EU delay enforcing its AI Act?
Anthropic's Revenue Hits $4 Billion Amid High AI Salaries and Inflation Challenges
Perplexity launches a $200 monthly subscription plan
xAI Secures $10 Billion for Grok AI Development Amid Controversy and Competition
OpenAI CEO Criticizes Meta's Recruiting Tactics, Highlights Cultural and Mission Differences
Meta reportedly hires four more researchers from OpenAI
There Are No New Ideas in AI - Only New Datasets
#AI, #Apple, #ChatGPT, #Claude, #Siri, #Anthropic, #OpenAI

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Innovation Pulse, your quick, no-nonsense update on the latest in AI.

(00:09):
First, we will cover the latest news.
Apple partners with AI firms to enhance Siri, the EU's AI Act faces delays,
and Meta's hiring spree from OpenAI intensifies AI competition.
After this, we'll dive deep into Yaakov Lasker's insights on how data,
not algorithms, is driving AI breakthroughs.

(00:33):
Apple is reconsidering its approach to Siri's AI capabilities after several setbacks with its in-house models.
The company is in discussions with Anthropic and OpenAI to potentially use their AI models.
Tests have shown Anthropic's Claude is a strong candidate, leading to talks about a partnership.

(00:54):
However, Anthropic's high pricing is causing Apple to also consider OpenAI and other options.
Apple aims to maintain user privacy by running AI on its own servers,
using models customized for its infrastructure.
Despite this, privacy remains a concern with outsourcing.

(01:17):
This shift would mark a significant change for Apple, a leader in tech innovation,
as they strive to keep up with competitors like Samsung, who are advancing in AI.
While no decisions have been made, Apple's exploration of third-party AI
reflects its urgent need to enhance Siri's intelligence and keep pace with industry trends.

(01:43):
With parts of the EU's AI Act set to take effect on August 2nd,
there's growing pressure to delay it.
Companies like Alphabet and Meta, along with European firms such as Mistral and ASML,
are urging the European Commission to pause the Act for several years.
They argue that enforcing the Act means extra compliance costs and confusion due to the absence

(02:07):
of clear guidelines. The AI Code of Practice, intended to help developers comply, missed its
May 2nd publication deadline. Some politicians and tech leaders are concerned that the Act
could hinder innovation, particularly in Europe, where companies have smaller compliance teams.
A group of 45 European companies called for a two-year delay on key obligations.

(02:33):
The European AI Board is considering implementing the Code of Practice by the end of 2025.
Let's now turn our attention to the AI industry's salaries.
Anthropic, an AI firm, has reportedly reached $4 billion in yearly revenue.

(02:54):
Recent developments include Anysphere hiring two key leaders from Anthropic's coding product,
Claude Code. Anysphere, which relies on Anthropic's AI for its Cursor app,
appointed Boris Cherny and Cat Wu to lead their engineering and product teams.
The AI industry is seeing high salaries to retain talent, with companies like OpenAI and

(03:16):
Anthropic paying between $200,000 and $690,000 for technical staff. Meanwhile, Meta's investment
in Scale AI reflects the competitive environment. In other AI news, Google faces challenges as AI
chatbots gain search market share. OpenAI's ChatGPT dominates with 80.1% of the

(03:40):
generative AI market, while Google holds 5.6%. The question remains whether Google can adapt
to these changes or risk becoming obsolete like Yahoo.
Perplexity has introduced a $200 per month subscription called Perplexity Max,
providing unlimited access to its Labs tool and early access to features like the upcoming

(04:06):
AI-powered browser, Comet. Max subscribers will also get priority use of the latest AI models.
This move makes Perplexity the latest AI firm to offer a premium tier, following OpenAI,
Google, Anthropic, and Cursor. Besides the Max plan, Perplexity offers a $20 a month Pro plan

(04:28):
and a $40 a month Enterprise Pro plan. In 2024, Perplexity reported $34 million in revenue,
but also significant expenses, mainly on cloud servers and AI model access. Its annual recurring
revenue reached $80 million in January. Despite growth, Perplexity faces stiff competition from

(04:50):
Google and OpenAI in the AI search market. The new Max subscription could enhance its competitive edge.
Elon Musk's AI startup, xAI, recently raised $10 billion in funding, split between secured
notes, term loans, and strategic equity investments. This capital will bolster xAI's infrastructure

(05:14):
and the development of its Grok AI chatbot, competing with OpenAI and others like Anthropic.
xAI has already set up 200,000 GPUs at its Colossus supercomputer in Memphis,
with plans for a million GPU facility. The funds will support building one of the world's largest

(05:35):
data centers and enhancing the Grok platform. xAI acquired X, formerly Twitter, integrating
Grok with the social media platform. The AI sector is increasingly competitive,
with companies like OpenAI and Anthropic securing significant funding. Musk describes
Grok as maximally truth-seeking and anti-woke, though it has stirred controversy with its responses.

(06:01):
Musk has publicly criticized OpenAI's shift from non-profit goals to commercial focus,
leading to tensions with its CEO, Sam Altman.
Join us as we explore the dynamics of AI recruitment. OpenAI's CEO expressed concerns
over Meta's recruiting spree, suggesting it could lead to cultural issues. He noted the tech industry

(06:26):
has evolved from niche to mainstream, with AI discussions becoming intense. Recent hires by
Meta, including former OpenAI staff, prompted a response. OpenAI's chief research officer
likened it to a home invasion. The CEO acknowledged Meta's talent acquisition,
but stressed OpenAI's mission-driven culture. He emphasized the value of OpenAI stock and

(06:53):
foresaw cultural challenges for Meta. Despite industry changes, he expressed confidence in
OpenAI's research direction, focusing on ethical AGI development. He praised OpenAI's
unique culture, describing it as a hub of innovation. He asserted that, while other
companies might shift focus, OpenAI remains committed to its goals. Employees echoed this

(07:19):
sentiment, highlighting OpenAI's distinctive and creative environment.
Meta has been actively recruiting talent from OpenAI, hiring notable researchers like Trapit
Bansal and others, as reported by TechCrunch and The Wall Street Journal. The Information now reveals

(07:40):
four more hires: Shengjia Zhao, Jiahui Yu, Shuchao Bi, and Hongyu Ren. This follows the launch
of Meta's Llama 4 AI models, which didn't meet CEO Mark Zuckerberg's expectations and faced criticism
over benchmark performance. OpenAI CEO Sam Altman suggested Meta offered $100 million signing

(08:04):
bonuses, but none of their top talent has left. Meta's CTO, Andrew Bosworth, explained to employees
that while senior leaders might receive significant offers, the terms are more complex than just a
one-time bonus. The competitive hiring underscores the ongoing rivalry between OpenAI and Meta

(08:25):
in the AI field. And now, let's pivot our discussion towards the main AI topic.
Hey everyone, I'm Alex, and this is Innovation Pulse, where we dig into the ideas reshaping our
world. Today, I've got Yaakov Lasker with me, someone who's been tracking AI developments for years

(08:51):
and has some pretty mind-bending insights about what's really driving this whole revolution.
Thanks for having me, Alex. And here's something that's going to flip your understanding of AI on
its head. What if I told you that every major breakthrough in artificial intelligence over
the past 15 years wasn't actually about brilliant new algorithms or revolutionary math?
Wait, what do you mean? I thought AI was all about these genius researchers at MIT and Google

(09:16):
coming up with increasingly clever ways to... That's exactly what everyone thinks. But here's the
kicker. Most of the core ideas powering today's AI have been sitting around since the 1990s,
some even earlier. The real game changer? It's been about unlocking massive new sources of data
that we could never access before. Okay, now you've got my attention. Because if that's true,

(09:40):
it changes everything about how we should be thinking about where AI is headed next.
Exactly. And it explains why some people are saying we're hitting a wall right now,
while others are predicting the next massive leap forward.
All right, let's dive into this. Because I'm thinking about all those headlines about GPT-4
and reasoning models, and you're telling me the secret sauce isn't the fancy neural networks.

(10:04):
Think about it like this. Imagine you're a detective and you've had the same investigation
tools for decades. Fingerprinting, DNA analysis, witness interviews. But suddenly, you get access
to everyone's cell phone location data, then security cameras from every building,
then social media posts. You haven't invented better detective work, but your solving power

(10:28):
just exploded. Oh, so the tools of AI, the basic learning algorithms, they were already there,
but we kept finding new crime scenes to investigate. Perfect analogy. And here's where it gets really
interesting. We can actually trace four massive breakthroughs in AI. And each one corresponds
to unlocking a completely new data source. Let me walk you through them. I'm ready. Start from the

(10:53):
beginning. 2012. Deep neural networks finally take off when AlexNet wins this huge image recognition
competition. But here's the thing. Neural networks weren't new. What was new was ImageNet.
This massive database of labeled images that suddenly gave us something substantial to train on.
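The point about labels is worth making concrete. Here is a toy supervised learner in Python — the points and labels below are invented, and a simple nearest-centroid rule stands in for a real neural network; the only thing it is meant to show is that training is impossible without labeled examples like ImageNet's image/label pairs:

```python
# Toy illustration: supervised learning only gets going once you have labels.
# A nearest-centroid "classifier" trained on a tiny labeled dataset.

def train(examples):
    """examples: list of ((x, y), label). Returns one centroid per label."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl])
            for lbl, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Assign the label whose centroid is nearest to the point."""
    px, py = point
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - px) ** 2
                             + (centroids[lbl][1] - py) ** 2)

# Labeled "dataset": two clusters with labels "cat" and "dog".
data = [((0.0, 0.0), "cat"), ((0.2, 0.1), "cat"),
        ((1.0, 1.0), "dog"), ((0.9, 1.1), "dog")]
model = train(data)
print(predict(model, (0.1, 0.0)))  # a point near the "cat" cluster -> "cat"
```

Strip the labels out of `data` and there is nothing to train on — which is exactly the gap ImageNet filled for neural networks that already existed.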
So it's like having a gym membership your whole life, but finally getting access to actual weights

(11:17):
to lift? Exactly. Then 2017 rolls around and Google releases this paper called Attention Is All You
Need, which introduces Transformers. Everyone thinks this is the revolutionary architecture that made
language models possible. But you're about to tell me Transformers weren't the real breakthrough
either. The real breakthrough was that Transformers finally let us train on the entire internet.

(11:39):
Like literally scraping and processing virtually every web page, every book, every article that
exists digitally. We didn't just get better at language. We got access to all of human written
knowledge. Hold up. That's actually wild when you put it that way. We went from training on
small curated datasets to everything. Everything. And this pattern keeps repeating. 2022: OpenAI

(12:06):
releases ChatGPT. And everyone's amazed by how human-like it sounds. The secret?
RLHF, reinforcement learning from human feedback, which sounds incredibly technical and advanced.
But reinforcement learning has been around since the early 1990s. What was new was using human
trainers to label what good responses look like. We unlocked humans as a data source for teaching

(12:31):
AI how to sound natural and helpful. Wait, I think I see where this is going. What's the fourth
breakthrough? 2024: reasoning models like OpenAI's o1. Everyone's talking about how AI can now
think and solve complex problems. But again, the underlying techniques aren't new.
What's new is using verifiers, calculators, compilers, logic checkers as a data source.
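The verifier-as-data-source loop can be sketched in a few lines of Python. Everything here is invented for illustration — the "model" is a stub that guesses answers to arithmetic questions, not a real LLM — but the shape of the loop is the point: sample candidate answers, keep only the ones a verifier certifies as correct, and those verified pairs become training data:

```python
# Toy sketch of "verifiers as a data source" for reasoning models.
import random

def propose_answers(question, k=8):
    """Stand-in for a model sampling k candidate answers (some wrong)."""
    a, b = question
    return [a + b + random.choice([-2, -1, 0, 0, 0, 1, 2]) for _ in range(k)]

def verifier(question, answer):
    """The verifier actually checks correctness -- here, plain arithmetic."""
    a, b = question
    return answer == a + b

def collect_training_data(questions):
    """Keep only (question, answer) pairs the verifier approves.
    These verified pairs become training data for the next model."""
    dataset = []
    for q in questions:
        for ans in propose_answers(q):
            if verifier(q, ans):
                dataset.append((q, ans))
                break  # one verified answer per question is enough here
    return dataset

random.seed(0)
data = collect_training_data([(2, 3), (10, 7), (40, 2)])
print(data)  # only verifier-approved answers survive
```

Swap the arithmetic check for a compiler, a unit test, or a logic checker and you have the same loop at scale: the verifier, not a human, is the thing generating the training signal.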

(12:55):
So instead of just learning from text or human feedback, these models can now learn from whether
their answers are actually correct. Precisely. And here's what's fascinating. Each time we unlock
a new data source, we go through this frenzied period where everyone's racing to collect more of that
type of data and squeeze every drop of improvement out of it. But here's what's starting to worry

(13:16):
me about this whole pattern. Are we running out of new data sources to unlock? That's the million
dollar question. Some researchers are arguing we've basically scraped the entire internet for text.
We're hitting limits on how much human feedback we can practically collect. So where do we go next?
And this is where it gets wild, right? Because if this pattern holds, the next breakthrough isn't

(13:39):
going to come from someone inventing a fancier algorithm. It's going to come from whoever
figures out how to tap into a massive new data source that we haven't properly harnessed yet.
And there are some pretty obvious candidates. I'm thinking video. YouTube uploads like
500 hours of content every minute. Exactly. That's orders of magnitude more data than all the text

(14:00):
on the internet. And video isn't just words. It's physics, body language, cultural context,
real world interactions. If someone cracks the code on efficiently training models on video data,
that would be like going from reading about riding a bike to actually watching millions of
hours of people riding bikes, crashing, learning, adapting. And Google owns YouTube, so they're

(14:24):
sitting on this gold mine. But here's another fascinating possibility. Robots. Right now,
we can't efficiently process all the sensor data from cameras and movement in a way that's useful
for training large models. But if we could figure out how to learn from robot sensors, we'd be
unlocking the physical world as a data source. Every robot interaction, every manipulation task,

(14:45):
every navigation challenge becomes training data for understanding how the real world actually works.
Okay, this is blowing my mind. But I have to ask, if it's really all about data, why are 95% of AI
researchers working on new methods instead of new data sources? That's the beautiful irony.
There's this famous essay called The Bitter Lesson that basically argues,

(15:09):
stop trying to be clever with algorithms, just scale up with more data and compute. But most
researchers are still chasing the clever algorithm breakthrough. It's like everyone's trying to
build a better fishing rod when the real opportunity is finding new lakes to fish in.
Perfect analogy. And here's what makes this really practical for anyone listening.
If you're thinking about investing in AI companies, building AI products, or even just

(15:32):
planning your career, the companies that win might not be the ones with the smartest algorithms.
They'll be the ones with access to the most valuable data sources that nobody else can tap into.
Exactly. Think about it. Google has YouTube and search. Tesla has millions of cars collecting
real world driving data. Meta has social interactions. The question is, what unique data

(15:55):
source exists that nobody's figured out how to use yet? So next time someone shows you a shiny new
AI model, don't just ask, how does it work? Ask, what data did it learn from that others
couldn't access? Because that data source might be the real competitive moat, not the algorithm
sitting on top of it. Yaakov, this has been absolutely fascinating. You've completely changed

(16:18):
how I think about AI progress. For everyone listening, remember, the next time you hear about an AI
breakthrough, dig deeper. The real innovation might not be in the code. It might be in the data.
Keep your eyes open for new data sources in your own field. The next AI revolution might just be
waiting for someone to figure out how to learn from information that's been sitting right in front

(16:42):
of us all along. Thanks for joining us on Innovation Pulse. I'm Alex, and until next time, keep
questioning the story everyone else is telling. And remember, sometimes the most profound insights
come from looking at what everyone else is overlooking. That's a wrap for today's podcast.

(17:03):
Apple is teaming up with AI firms to boost Siri while ensuring privacy and Meta's recruitment
from OpenAI sparks industry rivalry, while Yaakov Lasker suggests that future AI breakthroughs
hinge on accessing new data sources rather than inventing new algorithms.
Don't forget to like, subscribe, and share this episode with your friends and colleagues

(17:26):
so they can also stay updated on the latest news and gain powerful insights. Stay tuned for more updates.