
June 12, 2025 · 20 mins
OpenAI claims to have hit $10B in annual revenue
OpenAI Slashes Prices for o3 Model, Boosting Accessibility for Developers and Startups
OpenAI's o3-pro AI Model: Advanced Reasoning for ChatGPT Pro and Team Users
OpenAI's open model is delayed
Google Introduces "Scheduled Actions" for Gemini Assistant with AI Pro and Ultra Plans
Google's Glass Gambit
#AI, #OpenAI, #Google, #ChatGPT, #o3pro, #GeminiAssistant, #TechNews

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Innovation Pulse, your quick, no-nonsense update on the latest in AI.

(00:09):
First, we will cover the latest news.
OpenAI reports soaring revenues and cuts prices for its o3 model, while Google enhances
its Gemini Assistant, and OpenAI delays a new open model release.
After this, we'll dive deep into Google's strategic return to the smart glasses market

(00:30):
and how its Gemini AI system is set to redefine AI integration in wearables.
OpenAI reports reaching $10 billion in annual recurring revenue, up from around $5.5 billion
last year.
This includes income from consumer products, ChatGPT business products, and its API.

(00:53):
The company serves over 500 million weekly active users and 3 million paying business
customers.
The revenue milestone comes about two and a half years after launching ChatGPT.
OpenAI aims for $125 billion in revenue by 2029.

(01:14):
The company faces pressure to boost revenue quickly as it spends billions annually on
hiring talent and securing infrastructure for its AI systems.
However, OpenAI has not disclosed its operating expenses or whether it is near profitability.
OpenAI has announced an 80% price cut for its advanced reasoning language model o3,

(01:40):
reducing costs to $2 per million input tokens and $8 per million output tokens.
This significant reduction, highlighted by CEO Sam Altman, aims to encourage broader
experimentation and make the model more accessible.
The update positions OpenAI competitively against rival models like Google's

(02:04):
Gemini 2.5 Pro and Anthropic's Claude Opus 4.
As AI providers compete on performance and affordability, this move could benefit startups
and developers by reducing financial barriers.
The o3 model, now available via OpenAI's API, offers flexible pricing options, including

(02:24):
a Flex mode that accepts slower processing in exchange for lower costs.
This change is part of a broader trend toward making advanced AI capabilities more affordable
and accessible to a wider range of users.
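For developers curious what the new pricing looks like in practice, here is a minimal sketch of an o3 call using OpenAI's official Python SDK with the Flex service tier; the prompt and the cost arithmetic are purely illustrative, and the per-token rates are the ones quoted above.

```python
# Minimal sketch: calling o3 through the OpenAI API with Flex processing.
# Assumes the official `openai` Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3",
    messages=[{"role": "user", "content": "Outline a test plan for a small REST API."}],
    service_tier="flex",  # Flex accepts slower responses in exchange for lower cost
)

# Estimate the bill at the new rates: $2 per 1M input tokens, $8 per 1M output tokens.
usage = response.usage
cost = usage.prompt_tokens * 2 / 1e6 + usage.completion_tokens * 8 / 1e6
print(f"{usage.prompt_tokens} in / {usage.completion_tokens} out -> ${cost:.6f}")
```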
Join us as we discuss the impact of OpenAI's o3-pro.
OpenAI has introduced o3-pro, a highly capable AI model that builds on their o3 reasoning

(02:50):
model.
Unlike typical AI models, reasoning models solve problems step by step, enhancing reliability
in fields like physics and coding.
Available for ChatGPT Pro and Team users, o3-pro replaces the o1-pro model, with wider
access coming soon.

(03:10):
It's also integrated into OpenAI's developer API, priced at $20 per million input tokens
and $80 per million output tokens.
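A quick back-of-the-envelope comparison makes the gap between the two tiers concrete; the per-token prices below are the ones quoted in this episode, while the example workload is made up for illustration.

```python
# Compare the quoted API prices: o3 at $2/$8 and o3-pro at $20/$80
# per million input/output tokens. The example workload is hypothetical.
PRICES = {"o3": (2.00, 8.00), "o3-pro": (20.00, 80.00)}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at the quoted rates."""
    in_rate, out_rate = PRICES[model]
    return input_tokens * in_rate / 1e6 + output_tokens * out_rate / 1e6

# A hypothetical daily workload: 500 requests, 3k tokens in / 1k tokens out each.
for model in PRICES:
    daily = 500 * cost(model, 3_000, 1_000)
    print(f"{model}: ${daily:.2f}/day")
# o3: $7.00/day, o3-pro: $70.00/day -- a 10x difference at identical usage.
```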
Expert reviews favor o3-pro for clarity and accuracy in various domains.
It boasts features like web searching and file analysis, but has longer response times

(03:33):
and certain limitations, such as the inability to generate images.
Despite these limitations, o3-pro excels in AI benchmarks, outperforming competitors like Google's
Gemini 2.5 Pro and Anthropic's Claude 4 Opus in math and science evaluations.

(03:55):
OpenAI's CEO, Sam Altman, announced a delay in the release of their new open model, now
expected later this summer after June.
Altman shared that their research team made unexpected advancements, making the wait worthwhile.
Originally aimed for early summer, the model will have reasoning capabilities similar to

(04:17):
OpenAI's existing models.
OpenAI hopes to surpass other models, like DeepSeek's R1.
The AI field has grown competitive, with Mistral and Qwen releasing their own advanced models.
OpenAI is exploring enhanced features, possibly connecting their model to cloud-hosted AI for
complex tasks, though it's uncertain if these will be included.

(04:42):
This release is crucial for OpenAI's reputation with researchers and developers.
As Altman acknowledges, the company has been criticized for not open-sourcing its models,
and it aims to change that perception with this release.
Google is enhancing its Gemini Assistant with a new feature called Scheduled Actions.

(05:04):
This allows AI Pro and AI Ultra subscribers to instruct the assistant to perform tasks
at specific times.
Users can request daily calendar summaries or have blog post ideas generated every Monday.
Additionally, Gemini can handle one-off tasks, such as summarizing an event after it occurs.

(05:26):
To use this, subscribers can navigate to the Scheduled Actions page in the Gemini app settings.
This feature was initially spotted in April, aligning with Google's goal to have Gemini
perform more assistant-like tasks.
Comparable to OpenAI's ChatGPT, which offers reminders and recurring actions, Gemini aims

(05:48):
to streamline and automate routine tasks for its users, making it a more efficient digital
assistant.
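Google has not published a developer API for Scheduled Actions, but the feature maps naturally onto a simple data model; here is a purely hypothetical sketch of how recurring and one-off actions like the ones described above could be represented.

```python
# Hypothetical data model for Gemini-style scheduled actions.
# Google exposes this feature only through the app UI; nothing here is an official API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ScheduledAction:
    prompt: str                    # what the assistant should do
    run_at: datetime               # first (or only) execution time
    recurrence: str | None = None  # e.g. "daily", "weekly:monday"; None = one-off

actions = [
    ScheduledAction("Summarize today's calendar", datetime(2025, 6, 13, 7, 0), "daily"),
    ScheduledAction("Generate five blog post ideas", datetime(2025, 6, 16, 9, 0), "weekly:monday"),
    ScheduledAction("Summarize the keynote after it ends", datetime(2025, 6, 14, 18, 0)),  # one-off
]

for action in actions:
    kind = action.recurrence or "one-off"
    print(f"[{kind}] {action.run_at:%a %H:%M} -> {action.prompt}")
```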
And now, let's pivot our discussion to the main AI topic.
Hello everybody.
I'm Jakob Lasker, and today we're diving deep into Google's ambitious return to the

(06:13):
smart glasses arena and why they believe this represents nothing less than the next frontier
for artificial intelligence.
Let me paint you a picture of what Google envisions for our near future.
Imagine you're reading a book and your glasses can instantly summarize the page you're looking
at.
You're watching a YouTube video and your eyewear identifies the location being shown.

(06:38):
This isn't science fiction anymore.
Google has demonstrated exactly these capabilities using glasses powered by their Gemini AI system,
and it represents a fundamental shift in how we might interact with information in the
coming years.
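The page-summarizing demo boils down to a single multimodal request: a camera frame plus a short instruction. As a rough feel for that loop, here is a minimal sketch using Google's public google-genai Python SDK; the glasses' actual on-device pipeline is not public, so the model name and the frame source here are assumptions.

```python
# Rough sketch of the "summarize what I'm looking at" loop using the public
# google-genai SDK. The real glasses pipeline is proprietary; this only
# illustrates the shape of a multimodal request.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

with open("camera_frame.jpg", "rb") as f:  # stand-in for a glasses camera frame
    frame = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model choice
    contents=[
        types.Part.from_bytes(data=frame, mime_type="image/jpeg"),
        "Summarize the page of text visible in this image in two sentences.",
    ],
)
print(response.text)
```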
But to understand why this matters so much, we need to step back and examine the broader

(06:59):
context.
Google isn't alone in this vision.
Apple and Meta have likewise identified smart glasses as the next major computing
platform, even though current sales remain a fraction of smartphone volumes.
The question isn't whether this technology will eventually succeed, but rather who will
define how we experience it.

(07:21):
Google's approach through their new Android XR platform reveals a sophisticated understanding
of consumer behavior and technological limitations.
Unlike some competitors who position their devices as smartphone replacements, Google
sees smart glasses as smartphone augmenters.
Shahram Izadi, Google's VP and GM for Android XR, puts it perfectly when he describes this

(07:46):
as a growing ecosystem approach.
This perspective acknowledges a crucial truth that many tech companies miss.
People aren't ready to abandon their phones, but they are ready to enhance how they use
them.
This strategy makes perfect sense when you consider the fundamental advantages that glasses

(08:06):
offer as an AI platform.
Think about it from the perspective of context and convenience.
Your smartphone knows a lot about you through apps and location data, but it can't see
what you're seeing.
Smart glasses equipped with cameras and AI can understand your environment in real time,
providing assistance that feels truly contextual rather than reactive.

(08:30):
Add to this the hands-free interaction model, and you have a platform where an AI assistant
can provide information and help without interrupting your natural workflow.
Google's roadmap reveals a carefully orchestrated multi-pronged approach that shows they've
learned from their earlier missteps with Google Glass in 2013.

(08:51):
They're not putting all their eggs in one basket this time.
Instead, they're developing three distinct product categories, each serving different
use cases and market segments.
The first category centers on Project Moohan, a mixed reality headset developed in
partnership with Samsung.
This device, launching later this year, serves as Google's answer to Apple's Vision Pro.

(09:17):
It runs Android XR, supports both 2D Google Play apps and 3D experiences, and integrates
Gemini Live AI for screen recognition and interaction.
While this headset will likely carry a premium price tag similar to the Vision Pro's $3,500
cost, it establishes Google's technical capabilities and provides a platform for developers to

(09:42):
begin creating XR experiences.
The second category involves Project Aura, developed with Xreal, which represents Google's
developer-focused entry into true AR glasses.
These tethered glasses offer a 70-degree field of view and include dual cameras for
motion and hand tracking.

(10:04):
What's particularly interesting about this approach is how it bridges current technology
limitations with future ambitions.
Rather than attempting to cram all processing power into lightweight frames, these glasses
connect to external processing units similar to how early smartphones relied on tethered
connections to more powerful computers.

(10:26):
The third and perhaps most commercially significant category consists of AI-enabled glasses, developed
with established eyewear partners, including Warby Parker, Gentle Monster, and Kering
Eyewear.
These products directly compete with Meta's Ray-Ban smart glasses, but leverage Google's
deeper integration with Android smartphones and Google services.

(10:50):
The timing here is crucial.
While Project Aura serves developers and Project Moohan targets early adopters, these AI glasses
aim for mainstream consumers, potentially launching in 2026.
The genius of Google's partnership strategy becomes clear when you consider the practical

(11:11):
challenges of eyewear adoption.
Unlike smartphones or headphones, glasses require precise fitting, prescription customization,
and often in-person consultation.
By partnering with established eyewear retailers and manufacturers, Google sidesteps the massive
infrastructure investment required to distribute glasses effectively.

(11:33):
This approach allows them to focus on the technology while leveraging existing expertise
in frame design, lens manufacturing, and retail distribution.
But the real story here isn't about hardware specifications or partnership announcements.
It's about Google's vision for how AI will integrate into our daily lives through wearable

(11:54):
computing.
Every product in their lineup shares one crucial component: Gemini AI.
This isn't coincidental.
Google is betting that AI capabilities will drive smart glasses adoption more than augmented
reality features or display technology will.
Consider how this differs from previous approaches to smart glasses.

(12:19):
Earlier attempts focused heavily on display technology and augmented reality overlays.
Google Glass emphasized heads-up display functionality.
Magic Leap promised immersive mixed reality experiences.
These approaches required consumers to adapt to new ways of consuming information.

(12:41):
Google's current strategy flips this equation by making AI assistance the primary value
proposition, with display capabilities serving as an enhancement rather than the main attraction.
This AI-first approach addresses one of the fundamental challenges that has plagued smart
glasses for over a decade: the value proposition gap.

(13:04):
Previous generations of smart glasses required users to change their behavior significantly
to justify wearing an additional device.
Google's AI-enabled glasses promise to enhance existing behaviors rather than replace them.
You still read books, watch videos and navigate the world exactly as before.

(13:26):
But now you have an intelligent assistant that can see what you see and provide contextual
assistance.
The competitive implications of this strategy extend far beyond the smart glasses market.
Google's Android XR platform positions the company to influence how AI assistants integrate
with wearable devices across the entire ecosystem.

(13:50):
Just as Android established Google's influence in mobile computing, Android XR could determine
how AI assistants evolve in the wearable computing era.
Meta's current lead with Ray-Ban smart glasses provides an interesting counterpoint
to Google's approach.
Meta has focused on social features, media capture and basic AI interactions, largely

(14:14):
treating their glasses as accessories to their broader social media ecosystem.
Google's integration with Android smartphones, Google services and enterprise applications
suggests they're aiming for deeper utility rather than social engagement.
Apple's rumored entry into smart glasses creates another fascinating dynamic.

(14:37):
Apple's traditional approach emphasizes premium hardware experiences and tight ecosystem
integration.
If Apple applies their iPhone strategy to smart glasses, we might see expensive, beautifully
designed devices that prioritize user experience over broad market adoption.
Google's multi-tier approach with varying price points and capabilities could provide

(15:02):
broader market coverage while Apple focuses on the premium segment.
The technological challenges facing all competitors remain substantial.
Battery life, processing power, heat management and social acceptance continue to limit what's
possible with current smart glasses.
Google's tethered approach with Project Aura acknowledges these limitations while providing

(15:27):
a development platform for future untethered devices.
Their AI-first strategy with partner glasses sidesteps many hardware limitations by leveraging
smartphone processing power and connectivity.
Looking ahead, the success of Google's smart glasses initiative will depend heavily on
their ability to demonstrate clear utility rather than just technological novelty.

(15:52):
The AI capabilities they've demonstrated, such as real-time translation, visual description
and contextual information retrieval, suggest practical applications that could justify
adoption for specific use cases.
Accessibility represents another crucial dimension of this story.
Google has emphasized assistive features for users with hearing or vision impairments,

(16:16):
following successful examples like Apple's AirPods Pro hearing aid functionality.
Smart glasses with AI assistance could provide transformative benefits for users who need
additional support navigating the world, potentially creating a compelling early adopter segment
that values utility over novelty.

(16:37):
The business model implications of AI-enabled smart glasses also deserve consideration.
Unlike smartphones, which generate revenue through device sales and app store commissions,
smart glasses could enable new forms of contextual advertising and information services.
Google's advertising expertise and vast content ecosystem position them uniquely to monetize

(17:01):
smart glasses through services rather than just hardware sales.
However, privacy concerns around always-on cameras and AI analysis could significantly
impact adoption rates.
Google's history with data collection and advertising, while providing technical advantages,
may create consumer hesitancy around wearing Google-enabled cameras constantly.

(17:27):
The company will need to address these concerns transparently to achieve mainstream adoption.
The developer ecosystem will play a crucial role in determining whether Google's Android
XR platform gains traction.
Google's decision to connect Android XR with the existing Google Play Store provides
a significant advantage, giving developers familiar tools and distribution channels.

(17:51):
However, creating compelling experiences for smart glasses requires different design principles
than smartphone apps, and the learning curve could slow ecosystem development.
As we look toward the next few years, Google's smart glasses strategy represents a sophisticated
attempt to position AI as the killer application for wearable computing.

(18:14):
By focusing on practical utility rather than flashy augmented reality features, partnering
with established eyewear companies rather than building retail infrastructure, and creating
multiple product tiers rather than betting on a single device, Google has crafted an
approach that acknowledges both the promise and the practical challenges of smart glasses.

(18:36):
The ultimate success of this initiative will depend on execution, consumer acceptance,
and competitive response.
But Google's return to smart glasses with a clear AI-focused strategy and realistic
timeline suggests they've learned from both their own earlier attempts and the broader
industry's struggles with wearable computing adoption.

(18:59):
Whether smart glasses become the next major computing platform or remain a niche accessory
will likely be determined in the next three to five years.
Google's comprehensive approach gives them a strong position in this emerging market,
but the competition is just getting started.
The next frontier for AI might indeed be on our faces, but the battle for that territory

(19:24):
is far from decided.
That's our deep dive for today.
The convergence of AI capabilities, improved hardware, and strategic partnerships is creating
new possibilities for how we interact with information and technology.
Google's smart glasses gambit represents one of the most significant bets in this space,

(19:45):
and its success or failure will shape the future of wearable computing for years to come.
And that's a wrap for today's podcast.
OpenAI is doubling its revenue with strategic price cuts and new offerings, while Google
is re-entering the smart glasses market with AI-powered enhancements to improve everyday tasks.

(20:10):
Don't forget to like, subscribe, and share this episode with your friends and colleagues,
so they can also stay updated on the latest news and gain powerful insights.
Stay tuned for more updates.