
May 8, 2025 • 20 mins
Pinterest Newsroom
OpenAI Restructures as Public Benefit Corporation Amid Legal Dispute with Elon Musk
Gemini 2.5 Pro Preview Released: Enhanced Coding for Interactive Web Apps
Apple is looking to add AI search engines to Safari
Hugging Face releases a free Operator-like agentic AI tool
Why Companies Fear AI but Trust Cloud with Their Data
#AI, #Gemini2.5Pro, #OpenAI, #CloudComputing, #Pinterest, #HuggingFace, #AppleAI

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Innovation Pulse, your quick, no-nonsense update on the latest in AI.

(00:09):
First, we will cover the latest news. Pinterest is enhancing its visual search for fashion.
OpenAI plans a corporate restructure, Google unveils Gemini 2.5 Pro, and Apple explores
AI search integrations. After this, we'll dive deep into the paradox of enterprise technology

(00:30):
adoption and AI trust issues. Pinterest is enhancing its visual search platform,
allowing users to discover fashion content tailored to their tastes. By using images as
the starting point, the platform helps overcome the limitations of word-based searches.
New features include tools for interacting with image pins, enabling users to explore,

(00:54):
refine, and shop ideas that match their style. Initially available for women's fashion in
the United States, Canada, and the United Kingdom, these features will expand over time. Users can
identify and shop for outfit details, such as color or fit, with new tools like an animated

(01:15):
glow and a refinement bar. The platform's multimodal visual refinement technology, using
visual language models and AI, offers a rich search experience. This allows users to describe
their style ideas more accurately, making it easier to shop and explore. OpenAI, supported

(01:37):
by Microsoft and valued at $300 billion, announced plans to restructure into a public
benefit corporation while retaining nonprofit control. This decision, influenced by discussions
with California and Delaware Attorneys General, was confirmed by board chairman Bret Taylor.
The restructuring aims to harmonize the nonprofit's mission with commercial goals, allowing employees,

(02:04):
investors, and the nonprofit to own equity. CEO Sam Altman emphasized the unchanged mission
amidst a legal battle with co-founder Elon Musk, who opposes the shift. Musk's offer
to buy OpenAI for $97.4 billion was rejected. Concerns about losing nonprofit

(02:26):
governance led to a letter from ex-employees and civil society groups. Altman reiterated
the commitment to ensuring AI benefits humanity. The nonprofit will appoint directors for the
new corporation, maintaining mission focus. Join us as we discover the enhanced coding

(02:47):
capabilities. Today marks the early access release of Gemini 2.5 Pro Preview, an enhanced
version of the 2.5 Pro, designed for building interactive web apps. Originally set for launch
at Google I/O, the update is now available due to high demand. This version boasts improved

(03:08):
coding capabilities, excelling in tasks like code transformation and complex workflows.
It surpasses its predecessor on the WebDev Arena leaderboard by 147 Elo points, reflecting
users' preference for its ability to create visually appealing and functional apps. Gemini
2.5 Pro maintains strong performance in multimodal tasks, achieving 84.8% on the VideoMME benchmark.
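For context on what an Elo gap means, the standard Elo expectation formula converts a point difference into an expected head-to-head win rate. The 147-point figure comes from the episode; the formula below is the conventional one and is just an illustrative calculation, not something stated in the episode:

```python
def elo_expected_score(delta: float) -> float:
    """Expected win rate of the stronger side, given its Elo-point advantage."""
    return 1.0 / (1.0 + 10.0 ** (-delta / 400.0))

# A 147-point lead implies the newer model's output would be preferred
# in roughly 70% of head-to-head comparisons.
print(round(elo_expected_score(147), 3))  # → 0.7
```

In other words, a 147-point gap is substantial: raters favor the newer model about seven times out of ten.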

(03:37):
Users can access it through the Gemini API in Google AI Studio and Vertex AI, and can also
explore its features in the Gemini app. This release aims to empower users to effortlessly
code and create with just a single prompt. Apple is considering integrating AI search

(03:58):
engines from OpenAI, Perplexity, and Anthropic into Safari, as reported by Bloomberg. Eddy
Cue, Apple's senior vice president of services, revealed this during his testimony in the
United States Justice Department's lawsuit against Alphabet. Cue's testimony discussed
Apple's estimated $20 billion annual deal with Google, which makes Google the default

(04:21):
search engine on Safari. He noted a recent decline in Safari searches, attributing it
to the rise of AI usage. Cue believes AI search providers could eventually replace traditional
engines like Google, prompting Apple's interest in these services. However, he mentioned that
these AI services might not become the default yet, as they still require improvement. Apple

(04:47):
has already initiated discussions with Perplexity, according to Cue.
Hugging Face has launched a cloud-hosted AI agent called Open Computer Agent. Accessible
online, it uses a Linux virtual machine with applications like Firefox. Users can prompt

(05:07):
it for tasks, such as finding the Hugging Face HQ in Paris using Google Maps. While it handles
simple tasks well, complex ones, like booking flights, pose challenges. The agent struggles
with CAPTCHA tests, and users might face a waiting queue based on demand. Hugging Face's

(05:28):
goal wasn't to create a top-tier agent, but to show how open-source AI models are advancing and
becoming more affordable to run on cloud platforms. Aymeric Roucher from Hugging Face noted that
advanced vision models can perform complex workflows, clicking elements in a virtual
environment. Agentic technology is gaining traction, with 65% of companies exploring

(05:53):
AI agents, and the market expected to grow significantly by 2030.
And now, let's pivot our discussion towards the main AI topic.
Today we're going to explore a fascinating paradox in enterprise technology adoption.

(06:16):
While companies readily trust cloud services like Google Drive or Microsoft 365 with their
most sensitive data, these same organizations often hesitate to use AI tools like Gemini
or Copilot, even when they're built by the same vendors on the same infrastructure.
This disconnect between cloud trust and AI skepticism raises important questions about

(06:41):
data privacy, perception versus reality, and the future of enterprise AI adoption. To help
us navigate this complex landscape, I'm joined by data privacy expert David Bourne.
Welcome to Innovation Pulse, David.
Thanks for having me on Innovation Pulse, John. I appreciate that introduction to such

(07:01):
an intriguing paradox. The way organizations approach cloud storage versus AI services
reveals a lot about how businesses perceive risk and innovation. I'm looking forward to
our conversation today. Please go ahead with your first question.
Let's dive right into this paradox. Why do organizations seem to trust cloud storage

(07:21):
services more readily than AI services, even when they're often provided by the same
companies?
This paradox stems largely from the maturity gap between these services. Organizations routinely
store sensitive files in Google Drive or Microsoft 365 because these services are long-established
with proven track records and offer strong compliance guarantees like SOC 2, ISO, and GDPR

(07:45):
adherence, and business associate agreements for health data. Generative AI tools, even
when coming from Google or Microsoft, feel fundamentally different because they actively
process data in real time and generate new content. This dynamic nature creates fears
about unintended data exposure or misuse. While technically the cloud and AI services

(08:08):
may come from the same vendors and operate under similar security frameworks, many enterprises
mentally categorize them separately. We trust Drive and Office 365, but we're not sure
about Gemini or Copilot. It's the difference between passive storage versus active interpretation
of data.
That's an interesting distinction between passive storage and active interpretation.

(08:31):
What specifically makes AI feel more intrusive or risky to these organizations?
The key difference is in how AI interacts with data. Cloud storage is straightforward.
Files go in, the same files come out. But AI models analyze, interpret, and generate
new content based on your data, which feels inherently more invasive. There's an unpredictability

(08:52):
factor at play. An AI might make connections or generate outputs that weren't intended,
potentially exposing sensitive information in unexpected ways. Also, the black box nature
of many AI systems contributes to the uneasiness. With cloud storage, the process is transparent.
You know exactly what's happening to your data. With AI, especially large language models,

(09:16):
the inner workings are complex and often opaque. Organizations hesitate to trust what they
don't fully understand, particularly when it comes to sensitive business information.
Have there been specific incidents that have undermined trust in these AI systems and reinforced
this perception gap? Yes, several incidents have eroded confidence.
A notable example occurred in mid-2024, when reports emerged suggesting that Google's

(09:40):
Gemini AI appeared to scan private Google Drive documents without explicit user action.
Whether this was a genuine bug or a misunderstanding, the episode alarmed many users and organizations
as it suggested AI might read sensitive files behind the user's back. Even when organizations
accept the official policies on paper, stories like this magnify the perceived risk. They

(10:05):
feed into broader concerns about AI systems potentially operating in unpredictable ways,
models hallucinating, misusing data, or behaving in ways not fully understood by their creators.
The relative newness of generative AI means there's less historical evidence of consistent,
trustworthy behavior. So perception seems to be driving a lot of this hesitation. What

(10:28):
commitments are vendors making to address these concerns about their AI offerings?
First-party AI offerings are mirroring the protections of their cloud counterparts. Google
explicitly states that Workspace customer data used with Gemini remains under its cloud data
processing addendum and abides by your organization's existing controls. Similarly, Microsoft's Azure

(10:49):
OpenAI is designed so that prompts and outputs are not available to OpenAI or other tenants and
not used to train, retrain, or improve the models. In essence, these companies are trying to extend
the trust they've built in cloud storage to these new AI capabilities by applying the same security

(11:10):
principles. The message is, if you trust us with your files, you can trust us with your AI interactions, too.
What are the current data privacy policies for specialized AI providers like OpenAI and Anthropic?
Both OpenAI and Anthropic have similar enterprise data policies. OpenAI emphasizes that customers
own and control their data, and by default, they don't use business inputs or outputs to train

(11:35):
their models. They conduct SOC 2 audits, encrypt data, and allow administrators to set data retention
limits. Anthropic's Claude for Work API promises not to use organizational prompts or outputs for
model training by default. Both vendors offer HIPAA compliant business associate agreements and make

(11:56):
comparable commitments. Inputs and outputs belong to the user, are encrypted, and stay out of model
training unless specifically opted in. What kind of corporate adoption are we seeing for these AI
services? OpenAI reports ChatGPT has been implemented in over 80% of Fortune 500 companies.
Their customers span diverse sectors, including Block in financial services, Canva in design,

(12:17):
Carlyle in private equity, Estée Lauder in cosmetics, and PwC in consulting. Anthropic's
clients include GitLab, Midjourney, Slack, LexisNexis, and SAP. These customers use their AI for
specialized tasks like code review, customer QA, legal research, and compliance. The deployment

(12:40):
in finance, legal, and healthcare contexts suggests significant trust in data confidentiality.
Are there notable variations in how companies approach AI adoption based on region, particularly
between the US and Europe? The regional differences are striking. European companies tend to be much
more cautious, reflecting the stricter regulatory environment. The GDPR imposes significant

(13:02):
obligations around data minimization, transfer restrictions, and the right to explanation for
automated decisions. A German survey found approximately 30% of large tech firms view GDPR
as hindering AI adoption. In contrast, US firms face lighter requirements with no federal equivalent

(13:24):
to GDPR. American companies tend to be more willing to rely on contractual terms from providers
rather than avoiding cloud AI entirely. European firms often require additional data localization
and approvals, whereas US firms more readily pilot AI under existing compliance frameworks.
For organizations concerned about data privacy, what alternatives exist to using cloud-based AI

(13:47):
services? Open source models have emerged as a compelling alternative. Meta's Llama series,
Mistral AI's models, Falcon 40B, and others can be self-hosted on corporate infrastructure.
While they may not always match the capabilities of proprietary models, they're rapidly improving.
The primary advantage is complete control and privacy. All data stays in-house. Companies can

(14:09):
audit the models themselves and fine-tune them for specific use cases, eliminating concerns about
unseen algorithms or data reuse. What are the main disadvantages of implementing on-premise AI models?
On-premise AI is resource intensive. Companies must invest in hardware like GPUs, allocate data
center space, and build DevOps expertise. Large models require fine-tuning and maintenance,

(14:33):
demanding scarce machine learning engineering talent. Additionally, open models may still lag
behind state-of-the-art closed models on certain complex tasks. An on-premise team must continuously
track new releases and evaluate them, creating an ongoing maintenance burden that cloud services
handle automatically. Are certain industries more likely to choose on-premise models over cloud AI

(14:59):
services? Regulated industries like finance, healthcare, defense, and government are leading
the charge toward on-premise deployment. These sectors handle extremely sensitive data where
leakage could have severe legal or reputational consequences. We're also seeing hybrid approaches
emerge, where organizations use open source models for their most sensitive workflows while leveraging

(15:21):
cloud APIs for general tasks. The decision ultimately comes down to whether the value of complete data
control outweighs the additional costs and complexity. How are AI providers addressing
compliance needs for regulated industries? They're making significant accommodations. OpenAI and
Anthropic offer HIPAA business associate agreements, conduct SOC 2 audits, encrypt data, and provide

(15:46):
enterprise controls for data retention and access management. The providers recognize that cracking
regulated markets requires meeting industry-specific compliance standards. So they're building these
capabilities into their enterprise offerings rather than treating them as special cases.
Do smaller companies approach AI adoption differently than large enterprises? Smaller

(16:07):
companies tend to be more agile and risk tolerant in their AI adoption. Without extensive compliance
departments and legacy systems, they can move faster to integrate AI tools into their workflows.
Many startups are born in the cloud and don't have the same historical distinctions between
trusted and untrusted services. Resource constraints also play a role. Smaller

(16:31):
organizations typically can't afford sophisticated on-premise AI infrastructure,
so they're more likely to rely on cloud AI services with appropriate contractual protections.
Is the trust gap between cloud storage and AI services narrowing over time?
Yes, the gap is gradually closing as AI services mature. Organizations that were initially hesitant

(16:53):
are now running pilot programs, and successful pilots are converting to wider deployments as
confidence builds. The vendors are accelerating this process by extending familiar compliance
frameworks to their AI offerings. However, incidents like the Gemini Drive scanning controversy
show how fragile this trust can be. A single high-profile data handling issue can set back

(17:18):
adoption significantly. What advice would you give to organizations trying to make decisions about
AI adoption while protecting sensitive data? I recommend a thoughtful risk-based approach.
Understand what your actual data sensitivity requirements are. Not all data needs the same
level of protection. This allows for a hybrid approach where you might use cloud AI for lower

(17:39):
risk tasks while keeping high sensitivity operations in-house. Don't make assumptions
about vendor policies. Read the enterprise terms carefully. Consider starting with narrowly-scoped
pilot projects that allow you to build organizational comfort with AI while containing
potential risks. This gradual approach lets you develop governance frameworks before wider deployment.
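The risk-based hybrid approach David describes — keep high-sensitivity operations in-house, send lower-risk tasks to cloud AI — can be sketched as a simple routing layer. This is an illustrative skeleton under assumptions of my own: the sensitivity tiers, backend names, and routing rule are all hypothetical, not anything a specific vendor provides.

```python
from enum import Enum

class Sensitivity(Enum):
    """Hypothetical data-sensitivity tiers an organization might define."""
    PUBLIC = 1      # marketing copy, public docs
    INTERNAL = 2    # routine internal material
    RESTRICTED = 3  # e.g. health, financial, or legal records

# Hypothetical backend identifiers: a self-hosted open model and a cloud API.
ON_PREM_BACKEND = "on-prem-llama"
CLOUD_BACKEND = "cloud-api"

def route(sensitivity: Sensitivity) -> str:
    """Keep restricted data on in-house models; everything else may use cloud AI."""
    if sensitivity is Sensitivity.RESTRICTED:
        return ON_PREM_BACKEND
    return CLOUD_BACKEND

print(route(Sensitivity.RESTRICTED))  # → on-prem-llama
print(route(Sensitivity.PUBLIC))      # → cloud-api
```

The point of the sketch is that the policy decision (which tier goes where) is made once, explicitly, rather than left to each employee's judgment per prompt.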

(18:05):
Looking toward the future, how do you think the balance between innovation and data privacy
will evolve in the AI space? We'll see a convergence of approaches that better balances
innovation and privacy. AI providers will continue strengthening their enterprise privacy guarantees
while open-source models will keep improving in capability while becoming easier to deploy.

(18:26):
The result will be a spectrum of options, allowing organizations to match their approach to their
specific risk tolerance, from fully managed cloud AI with strong privacy controls to completely
in-house air-gapped deployments. This won't be a binary choice, but rather a continuum of
possibilities customized to each organization's needs. This has been an insightful conversation,

(18:48):
David. Before we wrap up, what final thoughts would you like to share with our listeners?
I'd encourage organizations to approach AI adoption as a strategic journey rather than a yes or no
decision. Be proactive about understanding vendor commitments, regulatory requirements,
and internal risk tolerance, but don't let perfect be the enemy of good. Many organizations are

(19:11):
finding value in starting small with controlled use cases and expanding as comfort grows. Remember
that this field is evolving rapidly. The organizations that will thrive are those that develop thoughtful
governance processes that can evolve alongside the technology, allowing them to capture the
benefits of AI while managing privacy considerations. Thank you, David, for sharing your expertise with

(19:33):
us today on Innovation Pulse. The insights you've provided will certainly help our listeners
navigate the complex intersection of AI capabilities and data privacy concerns.
To our audience, thanks for tuning in, and we'll be back next week with another deep dive into the
world of technology and innovation. Thanks again for having me, John. It's been a pleasure discussing

(19:56):
these important topics with you today. We've explored Pinterest's visual search advancements,
OpenAI's restructuring, Google's Gemini 2.5 Pro, AI integration in Apple Safari,
and Hugging Face's agentic technology, alongside the ongoing trust and adoption challenges in

(20:20):
enterprise AI. Don't forget to like, subscribe, and share this episode with your friends and
colleagues so they can also stay updated on the latest news and gain powerful insights.
Stay tuned for more updates.