Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Innovation Pulse, your quick, no-nonsense update on the latest in AI.
(00:09):
First, we will cover the latest news.
Walmart partners with OpenAI to personalize shopping, Nvidia debuts the
DGX Spark, Meta recruits top AI talent, Amazon launches QuickSuite,
and OpenAI's Sora app tops charts.
After this, we'll dive deep into OpenAI's strategic evolution,
(00:31):
showcased at the 2025 Developer Day. Stay tuned.
Walmart has partnered with OpenAI to enable faster shopping through the ChatGPT chatbot.
This move aims to adapt to changing consumer behaviors,
where shoppers increasingly use AI chatbots for gift ideas and finding deals.
(00:53):
Walmart CEO Doug McMillon stated that the new feature would transform the traditional
e-commerce experience, making it more multimedia, personalized, and contextual.
Although no launch date was provided, Walmart shares rose nearly 5% following the announcement.
OpenAI's instant checkout feature, initially supporting single item purchases from Etsy sellers,
(01:19):
will soon allow purchases through Walmart. OpenAI plans to charge companies for transactions
completed via ChatGPT. Apart from this partnership, Walmart also features an AI
shopping assistant named Sparky on its app.
Next up, Nvidia. The DGX Spark is an Arm-based system using Nvidia's DGX OS, a specialized Ubuntu Linux
(01:45):
for GPU tasks. It includes Nvidia's AI software, CUDA libraries, and NIM microservices.
Starting at $3,999, it offers a more affordable alternative to high-end GPUs,
like the RTX Pro 6000 and AI server GPUs, despite being less powerful.
(02:07):
The Spark's GB10 chip matches the performance of an RTX 5070,
but surpasses its 12GB video memory with 128GB of unified memory,
allowing it to run larger AI models. For instance, it can handle models like
OpenAI's recent GPT-OSS, requiring up to 80GB of memory.
(02:31):
Nvidia's CEO Jensen Huang highlighted the Spark launch by delivering a unit to Elon Musk,
reminiscent of a 2016 delivery to OpenAI, marking a new chapter in AI supercomputers.
Join us as we discuss the AI talent acquisition at Meta. Andrew Tulloch, co-founder of Mira Murati's
(02:54):
Thinking Machines Lab, has joined Meta Platforms after leaving the AI company.
A spokesperson confirmed his departure, citing personal reasons. Meta CEO Mark Zuckerberg had
previously tried to acquire Thinking Machines Lab, and after refusal, approached its employees,
including Tulloch. The Wall Street Journal reported that Tulloch was offered a package
(03:17):
potentially worth $1.5 billion over six years. This move is part of Zuckerberg's strategy
to strengthen Meta's AI capabilities by recruiting talent from competitors.
Meta is known for offering lucrative pay packages to attract top researchers.
This recruitment drive follows the underwhelming performance of Meta's Llama 4 model.
(03:43):
Meta's efforts include offering substantial bonuses, as noted by OpenAI CEO Sam Altman,
who mentioned Meta's offers of $100 million to entice talent.
Amazon has introduced QuickSuite, an AI-driven enterprise solution designed to
streamline access to critical data spread across various applications. This tool uses natural
(04:09):
language to help users find information and perform tasks quickly. QuickSuite acts as a
central hub, integrating with systems like Google Drive, Office 365, Salesforce, and more.
Users can create personalized agents, ask questions, and generate detailed reports,
(04:29):
improving productivity and data management. The tool offers features like Quick Research,
which aggregates information from different sources, and Quick Automate, allowing users
to create workflows and automate complex processes. Amazon's Finance team, for instance,
uses it to reconcile invoices. QuickSuite provides a comprehensive range of integrations,
(04:55):
surpassing other AI tools, like ChatGPT, in its connectivity options. AWS offers a 30-day free
trial for users to explore QuickSuite's capabilities. For now, let's focus on Sora's unprecedented
growth rate. OpenAI's video-generating app, Sora, quickly rose to the top of the United
(05:20):
States App Store, surpassing the iOS launch week of ChatGPT. Appfigures reports Sora achieved
627,000 downloads in its first week, compared to ChatGPT's 606,000. Despite being invite-only
and available only in the United States and Canada, Sora reached a million downloads in
(05:42):
less than five days, according to OpenAI's Bill Peebles. This impressive performance came as Sora's
videos, powered by the Sora 2 model, gained popularity on social media, allowing users to
create realistic deepfakes, including those of deceased individuals. By October 3rd,
Sora had become the number one app on the United States App Store, outperforming other AI app launches.
(06:09):
Daily iOS downloads peaked at 107,800 on October 1st and remained strong, despite limited access.
ChatGPT users in the US can now make purchases directly within the chat using a new feature
called Instant Checkout. Available to ChatGPT Free, Plus, and Pro users, this feature allows
(06:33):
single item purchases by simply clicking a buy button, confirming order details,
and completing the transaction without leaving the chat interface. Payments can be saved for
future use by Plus and Pro members. Merchants pay a small fee per transaction, but the service is
free for users. OpenAI assures that product results remain organic and unsponsored, with factors like
(07:00):
price and availability affecting search outcomes. Instant Checkout is built on the
Agentic Commerce Protocol, developed with Stripe, which facilitates seamless transactions by allowing
merchants to manage customer interactions and payment processes. Expansion to multi-item purchases
(07:21):
and more regions is planned for the future. And now, let's pivot our discussion to the main AI topic.
Welcome to Innovation Pulse. I'm Alex, and today we're going to explore OpenAI's 2025
(07:41):
Developer Day event, which marked a significant shift in how the company is positioning itself
in the AI ecosystem. With 800 million weekly ChatGPT users and a platform strategy that's
transforming how developers build and distribute AI applications, this event showcased OpenAI's
evolution from a model provider to something much more comprehensive. Joining me is Yakov Lasker,
(08:07):
who has been following OpenAI's developer strategy closely.
Thanks so much for having me, Alex. This was a fascinating event with a lot to unpack.
Please go ahead with your first question. Let's start with the basics. What was the scale of
this event compared to previous years? Dev Day 2025 was significantly larger than previous iterations.
They had over 1,500 developers attending in person at Fort Mason in San Francisco.
(08:32):
That's a 3.3x increase from the 450 attendees in 2024. The hybrid format included a live-streamed
opening keynote that drew over 22,000 concurrent viewers on YouTube and their website. Admission
cost $650 and was allocated through a lottery system, which tells you something about the demand.
The event featured CEO Sam Altman, head of developer experience Romain Huet, President
(08:58):
Greg Brockman, and notably included a closing fireside chat with Jony Ive, Apple's former
chief design officer who joined OpenAI after their $6.4 billion acquisition of his AI devices
start-up in May 2025. Those growth metrics are impressive. What about the platform itself?
How much has adoption grown? The numbers are staggering. ChatGPT usage surged
(09:21):
from 100 million weekly users in 2023 to 800 million in 2025, an 8x increase in just two years.
The API now processes 6 billion tokens per minute, up from 200 million in 2023, with multiple
developers each processing over 1 trillion tokens. They've reached 92% of Fortune 500 companies,
(09:45):
though interestingly, Anthropic maintains a competitive edge with 32% enterprise LLM usage
versus OpenAI's 25%. The developer platform has grown to 4 million developers, which creates this
massive ecosystem they're now trying to systematize. The Apps SDK seems to be getting a lot of
attention. What's the vision there? The Apps SDK is arguably the most strategically
(10:10):
significant announcement because it's transforming ChatGPT into an actual app platform.
Built on the model context protocol as an open standard, it enables developers to build
interactive applications that run directly inside ChatGPT conversations. These apps can render
custom UI, access conversation context and memory, and share backend logic. The launch partners
(10:35):
demonstrated impressive capabilities. Spotify enables playlist creation within ChatGPT,
Canva generates editable posters, Coursera recommends courses, Zillow provides interactive
real estate maps, and Expedia handles travel booking. The system uses natural language discovery,
(10:55):
so ChatGPT proactively suggests relevant apps for user tasks. They're even planning an e-commerce
protocol for instant checkout functionality directly in conversations. OpenAI frames this as
giving developers enormous distribution opportunities: instant access to 800 million potential users.
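Since the Apps SDK is built on the open Model Context Protocol, the heart of such an app is an MCP server exposing tools the assistant can call. Here's a minimal sketch using the open-source Python MCP SDK; the server name, tool, and stub data are hypothetical, and the real Apps SDK layers interactive UI rendering on top of this:

```python
# Minimal MCP server sketch (hypothetical app); the Apps SDK builds on servers like this.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("real-estate-search")  # hypothetical app name

@mcp.tool()
def search_listings(city: str, max_price: int) -> list[dict]:
    """Return listings the host assistant can render for the user (stub data here)."""
    return [{"address": "123 Main St", "city": city, "price": max_price - 50_000}]

if __name__ == "__main__":
    mcp.run()  # serve over MCP so a host like ChatGPT can discover and call the tool
```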
(11:17):
Though this isn't OpenAI's first attempt at something like this, right? Exactly. This is
actually their third attempt at an app store concept, and that's a legitimate concern.
ChatGPT plugins launched in March 2023, followed by custom GPTs in November 2023,
and both saw limited adoption. Critics are questioning whether restrictive initial access,
(11:39):
which is currently limited to select partners like Canva, Coursera, Figma, Spotify, and Zillow,
and performance concerns will hamper success again. The SDK enters preview immediately with
these select partners, while public app submissions, an app directory with a review process, and
monetization features arrive later in 2025. There's also the issue that apps running inside ChatGPT
(12:04):
need to perform as well as standalone interfaces, which is a technical challenge
that could make or break adoption. Let's talk about AgentKit. How does this fit into the broader
platform strategy? AgentKit represents OpenAI's comprehensive solution for the complete agent
development lifecycle. It bundles multiple components. AgentBuilder offers a visual drag
and drop canvas for creating agent workflows with branching logic, tool calls, and decision making.
(12:30):
ChatKit provides an embeddable chat UI for deploying conversational experiences in products and websites.
The connector registry centralizes MCP-based tool access with governance controls,
and they've enhanced the evals platform to support bring-your-own datasets, trace grading,
and automated prompt optimization. What's particularly bold is that the evals platform
(12:55):
now supports third-party model evaluations. Developers can assess models from Anthropic,
Google, and others within OpenAI's infrastructure. This signals extreme confidence that their models
will win head-to-head comparisons. The platform includes built-in observability for tracking
agent usage and a Guardrails SDK for safety screening in Python and TypeScript.
(13:20):
That's quite comprehensive. How quickly can developers actually build with this?
The speed is remarkable. During the keynote, OpenAI technical staff member Christina Huang
demonstrated building a complete two-agent workflow in just eight minutes,
a Dev Day schedule assistant called Frog that was published and made accessible at
(13:40):
openai.com's Dev Day directory. This demonstrates how dramatically
agent development timelines have compressed from months to minutes.
AgentKit achieved general availability, with the connector registry rolling out
in preview to enterprise and education customers via the Global Admin Console.
Sam Altman emphasized this throughout the event, saying,
(14:01):
it has never been faster to go from idea to product. Software used
to take months or years to build; you saw that it can take minutes now.
Codex seems to be a major announcement as well. What's new there?
Codex graduated from research preview to full production release, which is significant.
It's powered by GPT-5-Codex, a specialized version of GPT-5 purpose-trained for Codex and
(14:26):
agentic coding. The model demonstrates adaptive thinking that adjusts processing time based on
task complexity, can work on autonomous multi-hour tasks, and integrates tool calling with web
browsing across entire project contexts. As Alexander Embiricos described it,
the model can decide five minutes into a problem that it needs to spend another hour
(14:48):
based on complexity. New features include Slack integration for delegating coding tasks via
@Codex mentions in channels, a TypeScript Codex SDK for embedding capabilities in custom workflows,
and GitHub actions integration. Enterprise controls provide admin dashboards for monitoring
(15:08):
usage across CLI, IDE, and web interfaces, while tracking code review quality.
What kind of productivity gains are we talking about?
The metrics are impressive. Daily messages increased 10x since August 2025.
The system has served over 40 trillion tokens, and at OpenAI, nearly all engineers use Codex
with a resulting 70% increase in pull requests per week. External adoption shows similar impact.
(15:34):
Cisco reported 50% faster code review times after implementation.
Leading coding startups including Cursor, Windsurf, and Vercel have adopted GPT-5 Pro for their
platforms. Codex comes included with ChatGPT Plus, Pro, Business, Education, and Enterprise plans,
with metered API pricing launching October 20th, 2025. This solidifies OpenAI's position against
(16:01):
GitHub Copilot and other AI coding tools, with the specialized GPT-5-Codex model providing
coding-specific optimizations that general-purpose models lack. Moving to the models themselves,
what's the story with GPT-5 Pro? GPT-5 Pro is OpenAI's most advanced reasoning model,
and it became available via API after previously being limited to ChatGPT subscribers.
(16:26):
It's designed for finance, legal, healthcare, and other domains requiring high accuracy and
deep reasoning, delivering PhD-level reasoning across scientific domains with extended thinking
time for complex problems. The pricing reflects this premium positioning at $15 per million input
tokens and $120 per million output tokens, which is 10x more expensive than standard GPT-5 for input.
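To make those rates concrete, here's a quick back-of-the-envelope calculation at the quoted prices; the token counts are purely illustrative:

```python
# Cost of a single GPT-5 Pro call at the quoted rates: $15/M input, $120/M output.
INPUT_RATE = 15.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 120.00 / 1_000_000  # dollars per output token

def call_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# An illustrative long-context request: 20,000 tokens in, 3,000 tokens out.
print(f"${call_cost(20_000, 3_000):.2f}")  # $0.66 per call -- cheap in absolute terms,
                                           # but 10x standard GPT-5 on the input side
```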
(16:51):
This represents OpenAI's premium offering for mission critical applications where accuracy
and deep reasoning are worth the cost. Multiple analysts called API access to GPT-5 Pro the biggest
gift for developers from the event. And Sora 2 is now available through the API as well? Yes, Sora 2,
their next generation video and audio generation model launched in both API and consumer interfaces,
(17:15):
including the Sora iOS app, ChatGPT, Sora.com, and Azure AI Foundry. The API offers two variants:
Sora 2, optimized for speed at 720p for $0.10 per second, and Sora 2 Pro, delivering higher quality
at 1024p for $0.30 to $0.50 per second, depending on resolution. Key features include text-to-video
(17:40):
generation with adjustable resolution and duration, image-to-video creation, video remixing capabilities,
and rich soundscapes with ambient audio and synchronized effects, not just speech. The model
produces more realistic, physically consistent scenes with greater creative control through
detailed camera direction, supports both landscape and portrait orientations, and generates clips up
(18:05):
to 12 seconds long. A partnership with Mattel demonstrated sketch-to-toy concept workflows
for product development. Current API limitations include no support for video input or image-to-video of real people.
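As a rough sketch of what calling this looks like from the API, here's a hypothetical text-to-video request with the OpenAI Python client; the parameter names and polling flow are assumptions based on the features described, not a confirmed reference:

```python
# Hypothetical Sora 2 text-to-video request; method and parameter names are assumed.
import time
from openai import OpenAI

client = OpenAI()

video = client.videos.create(
    model="sora-2",  # speed-optimized 720p variant; "sora-2-pro" for higher quality
    prompt="A kayaker running a wild river at dusk, with ambient water and wind audio",
    seconds="8",       # clips run up to 12 seconds, per the announcement
    size="1280x720",
)

while video.status in ("queued", "in_progress"):  # poll until rendering finishes
    time.sleep(5)
    video = client.videos.retrieve(video.id)

print(video.status)  # "completed" means the clip is ready to download
```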
They also introduced some cost-optimized models, right? Exactly. Two new mini
(18:26):
models address cost-sensitive use cases. GPT Realtime Mini provides the same voice quality and
expressiveness as the advanced voice model at 70% lower cost, enabling budget-conscious
conversational AI applications with low-latency audio interactions, MCP server integration,
image inputs, and SIP phone calling support. GPT Image 1 Mini delivers 80% cost savings over
(18:51):
the large image generation model, at $2.00 per million text input tokens, $2.50 per million image input
tokens, and $8.00 per million image output tokens. This is ideal for mass visual content creation,
illustrations, and design prototyping. They also introduced 90% caching discounts for repeated
tokens with semantically similar prompts, which delivers major savings for customer service,
(19:14):
knowledge bases, and fixed Q&A flows. Something surprising happened with open source models.
What's that about? The GPT-OSS series represents OpenAI's first open-weight language models since
GPT-2 in 2019, which is a significant strategic pivot. Released August 5, 2025, and highlighted at
Dev Day, the series includes two models: GPT-OSS-120B, with 117 billion total parameters,
(19:40):
activating 5.1B per token and requiring 80 GB of VRAM, and GPT-OSS-20B, with 21 billion parameters
activating 3.6B per token and requiring 16 GB of VRAM. Both use a mixture-of-experts architecture
with alternating dense and locally banded sparse attention patterns, plus grouped multi-query
(20:00):
attention. Impressively, GPT-OSS-120B achieves near parity with OpenAI's o4-mini on reasoning tasks,
while GPT-OSS-20B matches or exceeds o3-mini on core benchmarks.
Licensed under Apache 2.0 for both commercial and research use, they're downloadable on
(20:21):
Hugging Face and GitHub, run locally via LM Studio and Ollama, and have cloud support from Amazon, Baseten, and Microsoft.
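For a sense of what running the smaller model locally looks like, here's a minimal Hugging Face Transformers sketch; the model id follows OpenAI's published naming, but treat the details as illustrative:

```python
# Local-inference sketch for the 20B open-weight model (roughly 16 GB of VRAM).
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # Apache-2.0 licensed, downloadable from Hugging Face
    torch_dtype="auto",
    device_map="auto",           # spread layers across available GPU(s) automatically
)

messages = [{"role": "user", "content": "Summarize mixture-of-experts in two sentences."}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1])  # last message in the chat is the model's reply
```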
Why would OpenAI release open models when they've been closed for years?
This competes directly with Meta's Llama, Mistral, and DeepSeek, while providing developers
flexibility without vendor lock-in. The models function as reasoning systems with variable
(20:44):
reasoning levels, raw chain of thought access, and tool calling, including web browsing and
Python execution within the reasoning process. By releasing capable open models, OpenAI aims to
capture the open-source developer ecosystem while maintaining its commercial API and platform
as the preferred production solution. They conducted comprehensive safety training with
(21:07):
adversarial fine-tuning under their Preparedness Framework, and external expert review before
release. It's a strategic bet that having open models won't cannibalize their paid services,
but will expand their influence in the developer community.
What about the developer tooling ecosystem beyond the major announcements?
(21:28):
OpenAI released multiple SDKs and frameworks. The Agents SDK provides a lightweight framework
for multi-agent workflows in both Python and TypeScript. It's a production-ready upgrade
from the experimental Swarm project. Core primitives include agents with instructions and tools,
handoffs for delegation between specialized agents, guardrails for validation, automatic
(21:52):
session management, and built-in tracing. The framework is provider-agnostic, supporting 100-plus
LLMs beyond OpenAI, with Temporal integration for durable long-running workflows,
and external tracing support for Logfire, AgentOps, Braintrust, Scorecard, and Keywords AI.
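To give a flavor of those primitives, here's a compact sketch with the openai-agents Python package; the agent names, instructions, and tool are invented for illustration:

```python
# Sketch of Agents SDK primitives: a function tool, a specialist, and a triage handoff.
from agents import Agent, Runner, function_tool

@function_tool
def lookup_order(order_id: str) -> str:
    """Hypothetical tool: fetch an order's status from a backend."""
    return f"Order {order_id}: shipped"

support = Agent(
    name="Support",
    instructions="Resolve order questions using the lookup tool.",
    tools=[lookup_order],
)

triage = Agent(
    name="Triage",
    instructions="Hand order questions to Support; answer everything else briefly.",
    handoffs=[support],  # delegation between specialized agents
)

result = Runner.run_sync(triage, "Where is order 42?")
print(result.final_output)
```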
(22:13):
ChatKit SDK offers a framework-agnostic chat solution, with deep UI customization, response
streaming, tool visualization, rich interactive widgets, and session management. GitHub repositories
include starter templates and advanced samples for rapid development.
There were some major infrastructure announcements too. What's the hardware story?
(22:35):
On the morning of Dev Day, OpenAI announced deploying 6 gigawatts of AMD Instinct GPUs
over several years, with AMD receiving a warrant for up to 160 million shares,
vesting tied to deployment milestones. AMD stock surged more than 30% on the announcement,
demonstrating the Midas Touch effect, where partnerships with OpenAI drive substantial
(22:59):
market value increases. Greg Brockman stated,
we need as much computing power as we can possibly get, comparing compute demand to
workforce needs. You can always get more out of more. Previous infrastructure partnerships
include 10 gigawatts of OpenAI-designed AI accelerators from Broadcom, a $30 billion per
(23:19):
year capacity contract with Oracle, and continued NVIDIA collaboration. The hardware strategy also
hints at consumer devices through the Jony Ive collaboration, with mentions of a
family of devices and edge, on-device AI capabilities. The keynote had some interesting
live demonstrations. What stood out? Romain Huet built a voice-controlled camera and lighting
(23:40):
system on stage, integrated an Xbox controller for light control, converted a whiteboard sketch
photo into a working mobile app screen, generated movie-style credits with attendee
names, and demonstrated apps self-evolving through natural language commands, all without
writing a single line of code by hand. Sora 2 demonstrations showcased AI-generated videos,
(24:03):
including beach dogs and a kayaker in a wild river, both featuring synchronized ambient audio and
sound effects beyond speech. The venue included a Sora Cinema, a cozy mini theater with popcorn,
showing AI-generated short films, and an interactive Alan Turing living portrait
in a phone booth installation that speaks back. Custom arcade games built with GPT-5
(24:29):
featured ASCII art themes throughout the venue. One highlight was how OpenAI built Storyboard,
a custom creative tool for the film industry, in just 48 hours during an internal hackathon
using Codex. How did the developer community actually react to all this? The reception was
notably mixed, which is a departure from previous years' enthusiasm. Every's assessment captured
(24:53):
the prevailing mood. Dev Day felt a little more like OpenAI doubling down on existing opportunities
than pushing the frontier of the future, noting it lacked the mind-blowing moments for developers
of Dev Days past. The Latent Space podcast called it the best one yet in execution and confidence,
but acknowledged an overwhelming number of product names and concepts. The consensus positioned the
(25:16):
event as more exciting for AI operations professionals than pure developers. Those
building production systems appreciated AgentKit's evaluation tools and observability features,
while developers seeking breakthrough capabilities felt underwhelmed. Within 48 hours,
207 engagement signals appeared across Hacker News and Reddit, with strong interest in production-ready,
(25:41):
multimodal AI, but concerns about the death of startups, as OpenAI now offers full-stack
capabilities previously provided by third-party developer tools. How does this compare to previous
Dev Days? Dev Day 2023 was revolutionary. Hundreds of attendees witnessed GPT-4 Turbo's launch with
128K context at 3x cheaper pricing, the Assistants API, custom GPTs, the GPT Store announcement,
(26:10):
the DALL-E 3 API, and multimodal capabilities. Reception was mind-blowing, though Sam Altman's
firing days later overshadowed the announcements. Dev Day 2024 was evolutionary. 450 attendees
experienced a deliberately subdued conference with the Realtime API, vision fine-tuning, prompt
(26:33):
caching, and model distillation. The tumultuous week featured executive departures, including CTO
Mira Murati. Dev Day 2025 represents consolidation: 1,500 attendees witnessed platform ecosystem
announcements, rather than frontier model breakthroughs. As PromptHub's analysis noted, GPT-4
(26:54):
Turbo, with 128K context, was a massive launch at the time. Notably, it's the only new model ever
announced at a Dev Day event. Don't expect major model announcements at these events in the future.
So the whole approach has shifted. Absolutely. The philosophical shift tells the story.
2023 said, look at our amazing new capabilities. 2024 said, here are tools to build with what exists.
(27:20):
2025 declares: ChatGPT is now a platform, an AI OS. The format evolution shows a transformation from
product launch spectacle to developer-focused intimacy to large-scale platform showcase.
Ben Thompson of Stratechery positioned OpenAI as the Windows of AI, praising the comprehensive
(27:41):
platform approach. VentureBeat declared, the era of simply asking AI questions is over,
and identified Codex GA as the most important announcement, the foundational layer upon which
other announcements were built. The verdict, Dev Day 2025 was the most polished, largest,
(28:03):
and most strategically coherent event, but lacked the breakthrough innovations that made 2023 feel
magical. What controversies or concerns emerged from the event? Multiple controversies surfaced
beyond general disappointment about incremental progress. The platform lock-in concern intensified
as OpenAI's expansion into orchestration, evaluation, and deployment tools increases
(28:25):
dependency. The more OpenAI owns the orchestration layer, the more power it wields. Security experts
highlighted MCP connector risks, with InfoQ warning: treat connectors as production integrations,
keep audit trails, and assume prompt injection will reach your edges. The expanded attack surface
from numerous third-party integrations requires defense-in-depth strategies, explicit consent
(28:50):
workflows, and least privilege access controls. The reliability question persists after major
outages in late 2024, though OpenAI's new service health dashboard and work toward five nines
(99.999%) uptime, about five minutes of downtime per year, up from the current three to four nines, aims to address enterprise concerns.
(29:10):
What about the competitive landscape? How is this affecting other companies?
The developer tools competitive threat affects numerous startups: OpenAI now provides full-stack
capabilities including prompts, evals, tracing, Agent Builder, and orchestration that
previously required third-party solutions. Companies like Zapier, n8n, Make, and specialized AI
(29:34):
operations platforms face existential pressure. However, Anthropic maintains an edge with 32%
enterprise LLM usage versus OpenAI's 25%, and Claude dominates coding with 42% market share compared
to OpenAI's 21%. Yet OpenAI's 800 million weekly users provide unmatched distribution
(29:55):
advantages that could tip enterprise decisions toward platform lock-in. Infrastructure partnership
announcements sent partner stock prices soaring 20% plus, demonstrating the market power of OpenAI
association. OpenAI's bold move to support third-party model evaluations in its platform
signals confidence: we don't care which models you eval against, we're confident in ours.
(30:20):
What did Sam Altman have to say about OpenAI's priorities?
Sam Altman's comment that profitability is not in his top 10 concerns underscores that the company
remains in investment and growth mode, prioritizing ecosystem dominance over near-term
margins. His quotes emphasized velocity and transformation throughout the event.
(30:41):
Beyond what I mentioned earlier, he said voice is going to become one of the primary ways that
people interact with AI. This aligns with their infrastructure scaling ambitions
and the Jony Ive collaboration hinting at consumer hardware devices. The overall message was
about moving fast and establishing dominance in the platform layer before worrying about
(31:03):
monetization. What's the deeper criticism beyond just being underwhelmed?
Every, too, articulated the concern many developers felt: where is the vision, and who is it for?
Dev Day felt like we were looking backwards at what's been done.
Compare that to Anthropic's new thinking brand campaign, which seems to have a way of inspiring
people from all walks of life. I yearn for inspiration, a picture of the dream they're
(31:27):
chasing. This critique suggests OpenAI successfully systematized AI deployment,
but lost the inspirational narrative that made earlier events transformative.
They've mastered the infrastructure for AI's future, but no longer surprise us with what AI can
do. It's the difference between building the roads versus discovering new destinations.
(31:50):
Both are valuable, but only one captures imagination. Looking at everything together,
what's your overall take on what this event represents? OpenAI Dev Day 2025 marked the
company's transition from model provider to comprehensive platform ecosystem, with implications
extending beyond any single announcement. The strategic coherence impressed even skeptical
(32:12):
observers. The full-stack approach from model APIs through development tools to end-user
distribution creates powerful network effects. Yet the mixed reception reveals a fundamental
tension. OpenAI has mastered systematizing AI deployment, but no longer surprises us with
what AI can do. The company is building infrastructure for AI's future, rather than pushing the frontier
(32:35):
of AI's capabilities. A mature strategy that wins markets, but inspires less wonder.
Whether this consolidation phase precedes the next breakthrough or represents a permanent
shift from revolution to evolution will define OpenAI's trajectory through 2026 and beyond.
As one analyst put it, they've become the Windows of AI,
(32:58):
and that's both their greatest strength and their greatest limitation.
Yakov, thanks so much for breaking all this down. It's clear that OpenAI is making some bold
strategic moves, even if they're not the headline grabbing breakthroughs we've seen before.
Thanks for having me, Alex. It's been a pleasure discussing what's shaping up to be a pivotal
moment in how AI gets built and deployed at scale.
(33:48):
Stay tuned for more updates.