
June 23, 2025 12 mins
In this episode, we begin with an overview of today's topics, followed by an exploration of OpenAI's significant contract with the Pentagon, highlighting the role of AI in government operations. Sam Altman shares insights on the monetization strategies for ChatGPT, providing a glimpse into OpenAI's financial approach. We discuss the organization's search for an energy policy expert, reflecting its focus on sustainability. Changes in OpenAI's insider risk management are examined, emphasizing the importance of security. Lastly, we delve into the trademark dispute between OpenAI and Jony Ive's "io," offering a look into the complexities of branding within the tech industry.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Ever wonder how artificial intelligence is reshaping national security?

(00:04):
Welcome to The OpenAI Daily Brief, your go-to for the latest artificial intelligence updates.
Today is Monday, June 23, 2025.
Here’s what you need to know about OpenAI's groundbreaking move into the government sector.
Let’s dive in.
OpenAI has just landed a massive two hundred million dollar contract with the Pentagon to

(00:28):
develop advanced artificial intelligence tools for the United States Department of Defense.
This deal marks the launch of OpenAI for Government, a new initiative aimed at bringing the company's cutting-edge technology to federal, state, and local governments.
The centerpiece of this initiative is the ChatGPT Gov platform, specifically designed for

(00:49):
government use.
Imagine a world where government agencies can streamline processes, enhance cybersecurity, and improve services for military personnel and their families using artificial intelligence.
That's the vision OpenAI is pursuing with this new venture.
The Department of Defense is eager to explore how artificial intelligence can transform

(01:10):
everything from healthcare delivery to data acquisition and cyber defense.
However, as exciting as this sounds, there are hurdles to overcome.
Dr. Jim Purtilo, an expert in technology adoption, cautions that the federal government is a complex bureaucracy, and adopting artificial intelligence on such a large scale is no easy

(01:33):
feat.
"You cannot just flip a switch and declarevictory, 'yeah, we're using artificial
intelligence!'" he warns.
The government will need to navigate numerousobstacles before fully integrating these new
technologies.
The rollout of OpenAI for Government willlikely include pilot programs to strategically
test and study the adoption of artificialintelligence in various government functions.

(01:58):
These programs will help identify challenges and costs, ensuring that stakeholders receive the best value from these innovations.
While the public sector often benefits from streamlined processes, Dr. Purtilo notes that federal regulations may limit the full potential of artificial intelligence by requiring many existing bureaucratic operations to remain intact.

(02:21):
This could restrict artificial intelligence's ability to revolutionize government operations, but with careful planning, significant improvements could still be achieved.
OpenAI's CEO, Sam Altman, recently shared his thoughts on the possibility of incorporating
ads into ChatGPT during a podcast discussion.

(02:41):
Altman isn't completely opposed to the idea.
In fact, he mentioned that he finds advertisements on platforms like Instagram "kinda cool" and has even purchased items from them.
However, he acknowledges that implementing ads in ChatGPT would require a lot of care to get it right.
The challenge, Altman explains, is maintaining the high level of trust that users currently

(03:04):
have in ChatGPT.
It's interesting, he points out, because while artificial intelligence can be quite reliable, it also has its quirks—like hallucinations, where the technology generates incorrect or nonsensical information.
This means that introducing ads could complicate things if not handled carefully.

(03:25):
Altman candidly admitted that users should not entirely trust ChatGPT, given its tendency to hallucinate.
This presents a unique challenge for advertisers, who would be hesitant to associate their brands with potential misinformation.
After all, no company wants their ad to come out "wonky donkey," as Altman humorously put it.

(03:46):
Given the financial demands of maintaining and improving AI technology, it's likely just a matter of time before ads make their way into ChatGPT.
Altman speculates that initially, ads might appear as a break in the text flow, with clickable links or short videos, rather than being generated by the AI itself.
However, with the rapid integration of generative AI into various business sectors, we

(04:09):
might eventually see more sophisticated ad formats emerging.
So, while the idea of ads in ChatGPT might seem inevitable, Altman emphasizes the importance of introducing them thoughtfully.
It's about balancing the financial sustainability of the platform with maintaining user trust and ensuring that the ads themselves are both effective and non-intrusive.

(04:32):
As we watch this space, it'll be intriguing to see how OpenAI navigates these waters.
OpenAI is on the hunt for an energy policy lead, and this isn't just any ordinary job.
It's a role that could significantly influence how the company powers its ambitious projects,
particularly its Stargate joint venture.

(04:53):
Stargate plans to spend a staggering five hundred billion dollars on data centers in the United States and beyond, and these centers will need a massive amount of energy—think one to two gigawatts for the larger campuses.
So, what's the big deal here?
Well, the energy policy lead will be based in Washington, D.C., and will focus on

(05:13):
collaborating with both federal and state governments.
Their mission?
To help power OpenAI's operations in the United States and shape a global policy strategy.
This means working closely with the Department of Energy, the Federal Energy Regulatory Commission, and various regional grid operators and utilities.
It's a job that demands expertise and finesse.

(05:37):
OpenAI is looking for someone with a wealth of experience—ten to fifteen plus years in energy policy, infrastructure, or sustainability roles.
They want someone who's been in the trenches, whether that's within the government, utilities, energy firms, or mission-driven organizations.
The pay is nothing to sneeze at either, with a salary range of two hundred eighty to three

(05:59):
hundred twenty-five thousand dollars, plus equity.
Now, why does this matter?
Well, if you've ever tried to manage a large-scale energy project, you'll know it's a balancing act.
OpenAI's Stargate venture involves a mix of cloud contracts, partnerships, and self-builds, each requiring different levels of involvement in energy planning.

(06:21):
The energy policy lead will play a crucial role in ensuring these data centers have the power they need without compromising on sustainability or reliability.
This is about more than just keeping the lights on; it's about making sure that OpenAI's operations are sustainable and efficient.
As the company pushes the boundaries of artificial intelligence, having a robust energy

(06:44):
strategy will be vital.
It's fascinating to see how OpenAI is not only innovating in the world of AI but also in how they approach energy consumption and sustainability.
This role could be a game-changer, not just for OpenAI, but for the energy sector as a whole.
OpenAI, the world's most valuable pure-play artificial intelligence software company, has

(07:08):
made a surprising move by letting go of several members from its insider risk team.
This team was crucial for safeguarding OpenAI's intellectual property, especially the model weights that are key to its competitive edge.
According to a report from The Information, OpenAI confirmed these changes, which align with new U.S.

(07:29):
government regulations to prevent sensitive artificial intelligence software details from being exported to potentially hostile entities.
The insider risk team at OpenAI played a pivotal role in ensuring that proprietary information, such as the parameters and model weights, remained protected from espionage or
unauthorized distribution.

(07:50):
These components are essential for the company's artificial intelligence models, determining how they respond to queries and giving OpenAI a competitive advantage in the market.
Now, you might be wondering why such a crucial team is being restructured.
Well, as OpenAI expands and the landscape of artificial intelligence threats evolves, the

(08:11):
company feels the need to adapt its security strategies.
This shake-up comes after the Biden Administration rolled out the AI Diffusion Rules, which impose strict controls on exporting artificial intelligence model weights and other sensitive data.
The rules also require that such data can only be stored outside the U.S.

(08:31):
under tight security conditions.
These new regulations are part of a broader effort to curb the risk of artificial intelligence model weights being exfiltrated by malicious actors.
Once these weights are out, they can be copied and distributed globally in an instant, posing a significant security threat.
The U.S. government has specifically pointed out risks involving companies from the People's Republic

(08:56):
of China, which have allegedly used foreign subsidiaries to acquire integrated circuits under export controls.
Given the high stakes, OpenAI's decision to revamp its insider risk team reflects a strategic move to better align with these new regulations and the growing threat of corporate espionage.
The company has been a prime target due to its high-profile contracts with the defense

(09:18):
department and its role in building sovereign artificial intelligence infrastructure both domestically and internationally.
It's a fascinating development that highlights the challenges artificial intelligence companies face in protecting their intellectual property while navigating an increasingly
complex regulatory environment.

(09:39):
As OpenAI continues to grow and innovate, it'll be interesting to see how they manage these risks and maintain their leadership in the artificial intelligence landscape.
OpenAI's latest venture with legendary designer Jony Ive has hit a trademark snag, and it’s causing quite a stir in the tech world.

(09:59):
Just imagine this: OpenAI, known for its groundbreaking work in artificial intelligence, teams up with Jony Ive, the mastermind behind some of Apple’s most iconic designs.
They’re poised to launch an innovative hardware line under the name "io," but suddenly, they've been stopped in their tracks by a trademark complaint.

(10:20):
So what's the story here?
Well, OpenAI recently took down a blog post announcing their six and a half billion dollar acquisition of Jony Ive’s hardware startup "io." This move came after a company called iyO claimed that the "io" name infringes on their trademark.
iyO, which already sells an artificial intelligence-powered "audio computer," believes

(10:42):
it has the rights to the name.
This unexpected hurdle has left OpenAI and Ive scrambling to figure out their next steps.
Despite the legal wrangling, OpenAI has made it clear that their partnership with Ive is still very much alive.
They’re actively exploring options to resolve the name dispute.

(11:03):
This isn’t just about a name; it's about the vision for a new wave of artificial intelligence hardware that could reshape the market.
The stakes are high, and both companies are eager to push forward.
Now, if you’re wondering who iyO is, they’re a company that markets an artificial intelligence device they describe as an "audio computer," essentially an artificial intelligence-powered

(11:26):
earbud.
It’s a bit like the now-defunct Humane AI Pin, allowing users to run natural language applications on the go.
With a product already in the market, iyO is determined to protect its brand identity fiercely.
As we wait to see how this trademark tangle unfolds, it's clear that OpenAI and Jony Ive's

(11:47):
collaboration is still a hot topic.
Whether they’ll be able to keep the "io" name or be forced to rebrand remains to be seen.
It’s a reminder of how even the biggest names in tech can face unexpected challenges.
That’s it for today’s OpenAI Daily Brief.
The ongoing saga of OpenAI and Jony Ive’s hardware venture highlights the complex

(12:09):
interplay between innovation and intellectual property.
Thanks for tuning in—stay updated with us for more insights.
This is Bob, signing off.
Until next time.