
November 2, 2025 21 mins
Opening – The Beautiful New Toy with a Rotten Core

Copilot Notebooks look like your new productivity savior. They're actually your next compliance nightmare. I realize that sounds dramatic, but it's not hyperbole—it's math. Every company that's tasted this shiny new toy is quietly building a governance problem large enough to earn its own cost center.

Here's the pitch: a Notebooks workspace that pulls together every relevant document, slide deck, spreadsheet, and email, then lets you chat with it like an omniscient assistant. At first, it feels like magic. Finally, your files have context. You ask a question; it draws in insights from across your entire organization and gives you intelligent synthesis. You feel powerful. Productive. Maybe even permanently promoted.

The problem begins the moment you believe the illusion. You think you're chatting with "a tool." You're actually training it to generate unauthorized composite data—text that sits in no compliance boundary, inherits no policy, and hides in no oversight system.

Your Copilot answers might look harmless—but every output is a derivative document whose parentage is invisible. Think of that for a second. The most sophisticated summarization engine in the Microsoft ecosystem, producing text with no lineage tagging. It's not the AI response that's dangerous. It's the data trail it leaves behind—the breadcrumb network no one is indexing.

To understand why Notebooks are so risky, we need to start with what they actually are beneath the pretty interface.

Section 1 – What Copilot Notebooks Actually Are

A Copilot Notebook isn't a single file. It's an aggregation layer—a temporary matrix that pulls data from sources like SharePoint, OneDrive, Teams chat threads, maybe even customer proposals your colleague buried in a subfolder three reorganizations ago. It doesn't copy those files directly; it references them through connectors that grant AI contextual access. The Notebook is, in simple terms, a reference map wrapped around a conversation window.

When users picture a "Notebook," they imagine a tidy Word document. Wrong. The Notebook is a dynamic composition zone. Each prompt creates synthesized text derived from those references. Each revision updates that synthesis. And like any composite object, it lives in the cracks between systems. It's not fully SharePoint. It's not your personal OneDrive. It's an AI workspace built on ephemeral logic—what you see is AI construction, not human authorship.

Think of it like giving Copilot the master key to all your filing cabinets, asking it to read everything, summarize it, and hand you back a neat briefing. Then calling that briefing yours. Technically, it is. Legally and ethically? That's blurrier.

The brilliance of this structure is hard to overstate. Teams can instantly generate campaign recaps, customer updates, solution drafts—no manual hunting. Ideation becomes effortless; you query everything you've ever worked on and get an elegantly phrased response in seconds. The system feels alive, responsive, almost psychic.

The trouble hides in that intelligence. Every time Copilot fuses two or three documents, it's forming a new data artifact. That artifact belongs nowhere. It doesn't inherit the sensitivity label from the HR record it summarized, the retention rule from the finance sheet it cited, or the metadata tags from the PowerPoint it interpreted.
Yet all of that information lives, invisibly, inside its sentences.

So each Notebook session becomes a small generator of derived content—fragments that read like harmless notes but imply restricted source material. Your AI-powered convenience quietly becomes a compliance centrifuge, spinning regulated data into unregulated text. To a user, the experience feels efficient. To an auditor, it looks combustible.

Now, that's what the user sees. But what happens under the surface—where storage and policy live—is where governance quietly breaks.

Section 2 – The Moment Governance Breaks

Here's the part everyone misses: the Notebook's intelligence doesn't just read your documents, it rewrites your governance logic. The moment Copilot synthesizes cross‑silo information, the connection between data and its protective wrapper snaps. Think of a sensitivity label as a seatbelt—you can unbuckle it by stepping into a Notebook.

When you ask Copilot to summarize HR performance, it might pull from payroll, performance reviews, and an internal survey in SharePoint. The output text looks like a neat paragraph about "team engagement trends," but buried inside those sentences are attributes from three different policy scopes. Finance data obeys one retention schedule; HR data another. In the Notebook, those distinctions collapse into mush.

Purview, the compliance radar Microsoft built to spot risky content, can't properly see that mush because the Notebook's workspace acts as a transient surface. It's not a file; it's a conversation layer. Purview scans files, not contexts, and therefore misses half the derivatives users generate during productive sessions.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Copilot notebooks look like your new productivity savior. They're actually
your next compliance nightmare. I realize that sounds dramatic, but
it's not hyperbole. It's math. Every company that's tasted this
shiny new toy is quietly building a governance problem large
enough to earn its own cost center. Here's the pitch.
A Notebooks workspace that pulls together every relevant document, slide deck, spreadsheet,
and email, then lets you chat with it like an

(00:21):
omniscient assistant. At first, it feels like magic. Finally, your
files have context. You ask a question, it draws in
insights from across your entire organization and gives you intelligent synthesis.
You feel powerful, productive, maybe even permanently promoted. The problem
begins the moment you believe the illusion. You think you're
chatting with a tool. You're actually training it to generate
unauthorized composite data, text that sits in no compliance boundary,

(00:45):
inherits no policy, and hides in no oversight system. Your
copilot answers might look harmless, but every output is a
derivative document whose parentage is invisible. Think of that for
a second. The most sophisticated summarization engine in the Microsoft ecosystem,
producing text with no lineage tagging. It's not the
AI response that's dangerous. It's the data trail it leaves
behind, the breadcrumb network no one is indexing. To understand

(01:07):
why notebooks are so risky, we need to start with
what they actually are beneath the pretty interface. What Copilot
Notebooks actually are. A Copilot Notebook isn't a single file.
It's an aggregation layer, a temporary matrix that pulls data
from sources like SharePoint, OneDrive, Teams chat threads, maybe
even customer proposals your colleague buried in a subfolder three
reorganizations ago. It doesn't copy those files directly. It references

(01:28):
them through connectors that grant AI contextual access. The notebook is,
in simple terms, a reference map wrapped around a conversation window.
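To make that reference-map idea concrete, here is a minimal conceptual sketch in Python. It is not Microsoft's implementation, and every class and field name is invented for illustration; the point is simply that the notebook holds pointers to governed sources plus synthesized conversation turns, and nothing in the structure carries the sources' labels forward.

```python
from dataclasses import dataclass, field

@dataclass
class SourceReference:
    """Pointer to a governed document; the notebook never copies the file."""
    location: str           # e.g. a SharePoint or OneDrive URL (illustrative)
    sensitivity_label: str  # the label lives on the source, not on the notebook

@dataclass
class NotebookTurn:
    """One prompt/response pair; the response is synthesized text."""
    prompt: str
    response: str           # note: no label and no lineage metadata attached

@dataclass
class CopilotNotebook:
    """Conceptual model only: a reference map wrapped around a conversation."""
    references: list[SourceReference] = field(default_factory=list)
    turns: list[NotebookTurn] = field(default_factory=list)

    def synthesized_text(self) -> str:
        # Everything a user copies out of the notebook comes from here,
        # stripped of the labels carried by self.references.
        return "\n".join(turn.response for turn in self.turns)
```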
When users picture a notebook, they imagine a tidy Word document. Wrong.
The notebook is a dynamic composition zone. Each prompt creates
synthesized text derived from those references. Each revision updates that synthesis,
and like any composite object, it lives in the cracks

(01:50):
between systems. It's not fully SharePoint. It's not your personal
OneDrive. It's an AI workspace built on ephemeral logic.
What you see is AI construction, not human authorship. Think
of it like giving Copilot the master key to all
your filing cabinets, asking it to read everything, summarize it,
and hand you back a neat briefing, then calling that
briefing yours. Technically, it is. Legally and ethically? That's blurrier.

(02:11):
The brilliance of this structure is hard to overstate. Teams
can instantly generate campaign recaps, customer updates, solution drafts, no
manual hunting. Ideation becomes effortless. You query everything you've ever
worked on and get an elegantly phrased response in seconds.
The system feels alive, responsive, almost psychic. The trouble hides
in that intelligence. Every time Copilot fuses two or three documents,

(02:31):
it's forming a new data artifact. That artifact belongs nowhere.
It doesn't inherit the sensitivity label from the HR record
it summarized, the retention rule from the finance sheet it cited,
or the metadata tags from the PowerPoint it interpreted.
Yet all of that information lives invisibly inside its sentences.
So each notebook session becomes a small generator of derived content,

(02:51):
fragments that read like harmless notes but imply restricted source material.
Your AI powered convenience quietly becomes a compliance centrifuge, spinning
regulated data into unregulated text. To a user, the experience
feels efficient. To an auditor, it looks combustible. Now that's
what the user sees. But what happens under the surface,
where storage and policy live, is where governance quietly breaks.

(03:12):
The moment governance breaks. Here's the part everyone misses: the notebook's
intelligence doesn't just read your documents, it rewrites your governance
logic. The moment Copilot synthesizes cross-silo information, the connection
between data and its protective wrapper snaps. Think of a
sensitivity label as a seat belt. You can unbuckle it
by stepping into a notebook. When you ask Copilot to
summarize HR performance, it might pull from payroll, performance reviews,

(03:35):
and an internal survey in SharePoint. The output text looks
like a neat paragraph about team engagement trends, but buried
inside those sentences are attributes from three different policy scopes.
Finance data obeys one retention schedule, HR data another. In
the notebook, those distinctions collapse into mush. Purview, the compliance
radar Microsoft built to spot risky content, can't properly see

(03:57):
that mush because the notebook's workspace acts as a transient
surface. It's not a file, it's a conversation layer.
Purview scans files, not contexts, and therefore misses half the
derivatives users generate during productive sessions. Data loss prevention or DLP,
has the same blindness. DLP rules trigger when someone downloads
or emails a labeled file, not when AI rephrases that

(04:18):
file's content and spit-shines it into something plausible but
policy free. It's like photocopying a stack of confidential folders
into a new binder and expecting the paper itself to
remember which pages were top secret. It won't. The classification
metadata lives in the originals. The copy is born naked.
Now imagine the user forwarding that AI crafted summary to
a colleague who wasn't cleared for the source data. There's

(04:40):
no alert, no label, no retention tag, just text that
feels safe because it came from copilot. Multiply that by
a whole department, and congratulations, you have a shadow data lake:
a collection of derivative insights nobody has mapped, indexed, or
secured. The shadow data lake sounds dramatic, but it's mundane.
Each notebook persists as cached context in the Copilot system.
Some of those contexts linger in the user's Microsoft 365

(05:03):
cloud cache, others surface in exported documents or
pasted Teams posts. Suddenly, your compliance boundary has fractal edges
too fine for traditional governance to trace. And then comes
the existential question: who owns that lake? The user who
initiated the notebook? Their manager who approved the project? The
tenant admin? Microsoft? Everyone assumes it's in the cloud somewhere,

(05:23):
which is organizational shorthand for not my problem, except it
is, because regulators won't subpoena the cloud, they'll subpoena you.
Here's the irony. Copilot works within Microsoft's own security parameters.
Access control, encryption, and tenant isolation still apply. What breaks
is inheritance. Governance assumes content lineage; AI assumes conceptual relevance.

(05:44):
Those two logics are incompatible. So while your structure remains
technically secure, it becomes legally incoherent. Once you recognize that
each notebook is a compliance orphan, you start asking the
unpopular question: who's responsible for raising it? The answer, predictably,
is nobody, until audit season arrives and you discover your
orphan has been very busy reproducing. Now that we've acknowledged
the birth of the problem, let's follow it as it

(06:04):
grows up into the broader crisis of data lineage: the
data lineage and compliance crisis. Data lineage is the genealogy
of information, who created it, how it mutated, and what
authority governs it. Compliance depends on that genealogy. Lose it,
and every policy built on it collapses, like a family
tree written on a napkin. When Copilot builds a notebook summary,
it doesn't just remix data, it vaporizes the family tree.

(06:26):
The AI produces sentences that express conclusions sourced from dozens
of files, yet it doesn't embed citation metadata. To a
compliance officer, that's an unidentified adoptive child. Who were its parents?
HR? Finance? A file from legal dated last summer? Copilot
shrugs; its job was understanding, not remembering. Record keeping thrives
on provenance. Every retention rule, every right-to-be-forgotten request,

(06:48):
every audit trail assumes you can trace insight back to origin.
Notebooks sever that trace. If a customer requests deletion of
their personal data, GDPR demands you verify purging in all
derivative storage, but notebooks blur what counts as storage. The
content isn't technically stored, it's synthesized. Yet pieces of that
synthesis re-enter stored environments when users copy, paste, export,

(07:09):
or reference them elsewhere. The regulatory perimeter becomes a circle
drawn in mist. Picture an analyst asking Copilot to summarize
a revenue impact report that referenced credit card statistics under
PCI compliance. The AI generates a paragraph, "retail growth driven
by premium card users": no numbers, no names, so it
looks benign. That summary ends up in a sales pitch deck. Congratulations,
sensitive financial data has just been laundered through an innocent sentence.

(07:31):
The origin evaporates, but the obligation remains. Some defenders insist
notebooks are temporary scratch pads. Theoretically, that's true. Practically, users
never treat them that way. They export answers to Word,
email them, staple them into project charters. The scratch pad
becomes the published copy. Every time that happens, the derivative
data reproduces. Each reproduction inherits none of the original restrictions,

(07:54):
making enforcement impossible downstream. Try auditing that mess. You can't
tag what you can't trace. Purview's catalog lists the source
documents neatly, but the notebook's offspring appear nowhere. Version control? Irrelevant.
There's no version record because the AI overwrote itself conversationally.
Your audit log shows a single session ID, not the data
fusion it performed inside. From a compliance standpoint, it's like

(08:14):
reviewing CCTV footage that only captured the doorway, never what
happened inside the room. Here's the counterintuitive twist. The better
copilot becomes, the worse this gets. As the model learns
to merge context semantically, it pulls more precise fragments from
more sources, producing output that is more accurate but less traceable.
Precision inversely correlates with auditability. The sharper the summary, the

(08:34):
fainter its lineage. Think of quoting classified intelligence during a
water cooler chat. You paraphrase it just enough to sound clever,
then forget that technically you just leaked state secrets. That's
how notebooks behave, quoting classified insight in colloquial form. Without
metadata inheritance, compliance tooling has nothing to grip. You can't
prove retention, deletion, or authorization. In effect, your enterprise creates

(08:56):
hundreds of tiny, amnesiac documents, each confident in its own
authority, none aware of its origin story. Multiply by months,
and you've replaced structured record keeping with conversational entropy. Regulators
don't care that the data was synthesized by AI. They'll
treat it as any other uncontrolled derivative, and internal policies
are equally unforgiving. If retention fails to propagate, someone signs

(09:16):
an attestation that becomes incorrect the moment a notebook summary
escapes its bounds. So the lineage issue isn't philosophical, it's
quantifiable liability. Governance relies on knowing how something came to exist.
Copilot knows that it exists, not from where. That single
difference turns compliance reporting into guesswork. The velocity of notebooks
ensures the guesswork compounds. Each new conversation references older derivatives,

(09:39):
your orphan data raising new orphans. Before long, entire internal
reports are built on untraceable DNA. If the architecture and
behavior manifest governance chaos, the next logical question is, can
you govern chaos deliberately? Spoiler: you can, but only if
you admit it's chaos first. That's where we go next.
How to regain control. Let's rephrase the chaos into procedure.
The only cure for derivative entropy is deliberate governance: rules

(10:02):
that treat AI output as first class data, not disposable conversation.
You can't prevent Copilot from generating summaries any more than you
can stop employees from thinking, but you can shape how
those thoughts are captured, labeled, and retired before they metastasize
into compliance gaps. Start with the simplest safeguard, default sensitivity
labeling on every notebook output. The rule should be automatic,

(10:23):
tenant wide, and impossible to opt out of. When a
user spawns a notebook, the first line of policy says,
any content derived here inherits the highest sensitivity of its sources.
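Expressed as code, that inheritance rule is just a maximum over an ordered label scale. This is a minimal sketch; the label names and their ordering are placeholders for whatever taxonomy your tenant actually uses.

```python
# Ordered from least to most restrictive; adjust to your tenant's taxonomy.
LABEL_ORDER = ["Public", "General", "Confidential", "Highly Confidential"]

def inherited_label(source_labels: list[str]) -> str:
    """Return the most restrictive label among a notebook's sources.

    Unlabeled or unknown sources are treated as maximally restrictive,
    erring on the side of paranoia, as the tenant-wide default should.
    """
    if not source_labels:
        return LABEL_ORDER[-1]
    ranks = [
        LABEL_ORDER.index(lbl) if lbl in LABEL_ORDER else len(LABEL_ORDER)
        for lbl in source_labels
    ]
    return LABEL_ORDER[min(max(ranks), len(LABEL_ORDER) - 1)]

# Example: a notebook that touched a general deck and a confidential HR file.
print(inherited_label(["General", "Confidential"]))  # -> "Confidential"
```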
That approach may feel conservative. It is. But governance always
errs on the side of paranoia. Better to overprotect one
brainstorming session than defend a subpoena where you must prove

(10:43):
an unlabeled summary was harmless. Next, monitor usage through Purview
audit logs. Yes, most administrators assume Purview only tracks structured files.
It doesn't have to. You can extend telemetry by correlating
notebook activity events (session created, query executed, output shared) with
DLP alerts. If a user repeatedly exports notebook responses
and emails them outside the tenant, you have early warning

(11:03):
of a shadow lake expanding. In other words, pair Copilot's
productivity metrics with your compliance dashboards. It's not surveillance, it's hygiene.
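One low-tech way to pair those signals is to post-process an export of your audit log. The sketch below assumes you have already pulled the relevant events into a JSON file; the operation names and fields are illustrative, not the exact schema any Microsoft API emits.

```python
import json
from collections import Counter

# Illustrative event names; map these to whatever your audit export actually uses.
EXPORT_EVENTS = {"NotebookOutputExported", "NotebookOutputShared"}
ALERT_THRESHOLD = 5  # flag users who externalize notebook output this often

def shadow_lake_warnings(audit_export_path: str) -> dict[str, int]:
    """Count, per user, how often Copilot notebook output left the tenant."""
    with open(audit_export_path, encoding="utf-8") as fh:
        events = json.load(fh)  # expected: a list of event dictionaries

    external_shares = Counter(
        evt["user"]
        for evt in events
        if evt.get("operation") in EXPORT_EVENTS and evt.get("external_recipient")
    )
    # Early warning of a shadow data lake expanding.
    return {user: n for user, n in external_shares.items() if n >= ALERT_THRESHOLD}
```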
Restrict sharing. By design, a notebook should behave like a
restricted lab, not a cafeteria. Limit external collaboration groups, disable
public sharing links, and bind each notebook to an owner
role with explicit retention authority. That owner becomes responsible for

(11:24):
lifecycle enforcement: versioning, archiving, and deletion at project close.
Treat the notebook container as transient. Its purpose is discovery,
not knowledge storage. Now introduce a new concept your compliance
team will eventually adore: derived data policies. Traditional governance stops
at the document level; derived data policies take aim at the offspring.
These are policies that define obligations for synthesized content itself.

(11:45):
For example, AI-generated summaries must inherit data classification tags
from parent inputs if classification confidence is above sixty percent. That
sounds technical because it is; you're requiring the AI to surface lineage
in metadata form. Whether Microsoft exposes those hooks now or later,
design your policy frameworks assuming they will exist, future-proofing
your bureaucracy.
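As a sketch of what such a derived data policy could look like in code, assume each parent document exposes its classification tags and a confidence score for how strongly it shaped the summary. The structures and the threshold are illustrative, since Microsoft does not expose these hooks today.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.6  # the "above sixty percent" rule

@dataclass
class ParentInput:
    name: str
    classification_tags: set[str]   # e.g. {"PII", "Finance"} (illustrative)
    contribution_confidence: float  # how strongly this source shaped the summary

def derived_tags(parents: list[ParentInput]) -> set[str]:
    """Tags an AI-generated summary inherits under the derived-data policy."""
    inherited: set[str] = set()
    for parent in parents:
        if parent.contribution_confidence >= CONFIDENCE_THRESHOLD:
            inherited |= parent.classification_tags
    return inherited

# A summary drawing heavily on an HR review and only lightly on a public deck.
print(derived_tags([
    ParentInput("hr_review.docx", {"PII", "HR"}, 0.82),
    ParentInput("public_deck.pptx", {"Public"}, 0.30),
]))  # -> {"PII", "HR"} (order may vary)
```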

(12:08):
Lifecycle management follows naturally. Each notebook should have an expiration date, thirty, sixty, or ninety days by default.
When that date arrives, output either graduates into a governed
document library or is lawfully forgotten, no in between. If
users need to revisit the synthesis, they must rehydrate it
from governed sources. The rule reinforces context freshness and truncates
lingering exposure. Pair expiration with version history.
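A lifecycle rule like that is straightforward to encode once each notebook carries a creation date. The sketch below is illustrative only; "promote to library" and "purge" stand in for whatever export and deletion mechanisms your tenant actually provides.

```python
from datetime import date, timedelta

DEFAULT_LIFETIME = timedelta(days=30)  # or 60 / 90, per policy

def enforce_lifecycle(created_on: date, reviewed_and_kept: bool, today: date) -> str:
    """Graduate a notebook's output into governance or lawfully forget it."""
    if today < created_on + DEFAULT_LIFETIME:
        return "active"              # still within its expiration window
    if reviewed_and_kept:
        return "promote_to_library"  # export into a governed document library
    return "purge"                   # no in-between: delete the derivative

# A notebook created in September, never reviewed, checked in November.
print(enforce_lifecycle(date(2025, 9, 1), reviewed_and_kept=False,
                        today=date(2025, 11, 2)))  # -> "purge"
```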

(12:30):
Even scratch spaces deserve audit trails. A notebook container without logs is a
sandbox without fences. Let's pivot from policy ideals to human behavior,
because every compliance breach begins with optimism. Stories help. One
large international enterprise, very proud of its maturity model, discovered that
a Copilot Notebook summarizing bid data had accidentally leaked
into vendor correspondence. The notebook pulled fragments from archived contract

(12:53):
proposals that were never meant to see daylight again. Why? Retention
didn't propagate. The AI summary included paraphrased win/loss metrics,
and an enthusiastic analyst pasted them into an external email,
unaware the numbers trace back to restricted archives. It wasn't espionage,
it was interface convenience. Governance failed quietly because nobody thought
synthetic text needed supervision. That incident produced a new cultural mantra.

(13:15):
If Copilot wrote it, classify it. It sounds blunt, but
clarity beats complexity. The company now labels every Copilot generated
paragraph as confidential until manually reviewed. They build conditional access
rules that block sharing until content reviewers certify the derivative
is safe. It slows workflows slightly, but compared to breach cost,
it's negligible. The next layer of defense isn't technology, it's conversation.

(13:37):
Bring IT, compliance, and business units together to define governance boundaries.
Too often, every department assumes the others are handling AI oversight.
The result? Nobody does. Form a cross-functional council that
decides which data sets can legally feed Copilot Notebooks, how
summaries are stored, and when deletion becomes mandatory. The same
meeting should define remediation protocols for existing orphan notebooks: run Purview scans,

(13:59):
classify outputs, manually archive or purge. At an operational level,
the process resembles environmental cleanup: you identify contamination (discover), analyze
its origin (classify), contain the spill (restrict access), and enforce remediation
(delete or reclassify). The rhythm of discover, classify, contain, enforce
translates perfectly into Copilot governance.
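That cleanup rhythm translates naturally into a four-stage pipeline. The following skeleton is a sketch: each stage function is a stub you would wire to your own scanning, labeling, and deletion tooling.

```python
from typing import Iterable

def discover() -> Iterable[dict]:
    """Stage 1: find orphaned notebook outputs (e.g. from scan exports)."""
    return [{"id": "nb-001", "text": "paraphrased win/loss metrics", "label": None}]

def classify(item: dict) -> dict:
    """Stage 2: analyze origin and assign a sensitivity label."""
    item["label"] = "Confidential"  # stand-in for a real classification step
    return item

def contain(item: dict) -> dict:
    """Stage 3: restrict access until remediation completes."""
    item["sharing_blocked"] = True
    return item

def enforce(item: dict) -> str:
    """Stage 4: delete, or reclassify into a governed store."""
    return f"{item['id']}: reclassified as {item['label']}"

def remediation_run() -> list[str]:
    """Discover, classify, contain, enforce, in that order."""
    return [enforce(contain(classify(item))) for item in discover()]

print(remediation_run())
```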

(14:21):
You're not punishing innovation, you're building waste management for digital runoff. There's also a cultural
trick that works better than any policy binder. Anthropomorphise your AI.
Treat Copilot like an over-eager intern, brilliant at digesting information,
terrible at discretion. You'd never let an intern email clients
unsupervised or store confidential notes on a USB stick. Apply
the same instinct here. Before sharing a notebook output, ask

(14:42):
would I send this if an intern wrote it? If
the answer is no, label it or delete it. Revisit training
materials to emphasize that generative convenience doesn't neutralize responsibility. Every
AI summary is a drafted record; remind users that temporary
doesn't mean exempt. Push this awareness through onboarding, internal newsletters,
even casual briefings. Governance isn't just technology, it's etiquette encoded

(15:03):
into routines. If you're wondering when all this becomes automatic,
you're not alone. Microsoft's roadmap hints that future Purview releases
will natively ingest Copilot artifacts for classification and retention control.
But until those APIs mature, the manual approach (naming conventions,
periodic audits, and cross-department accountability) remains mandatory. Think of
your enterprise as writing the precedent before the law exists.

(15:25):
Voluntary discipline today becomes regulatory compliance tomorrow. And yes, this
governance work costs time, but disorder collects compound interest. Every
unlabeled notebook is a liability accruing silently in your cloud.
You either pay up early with structure or later with lawyers.
Your choice, though one of them bills by the hour.
Control isn't optional anymore. AI governance isn't an abstraction, it's infrastructure.

(15:48):
Without it, intelligent productivity becomes an intelligent liability. Let's zoom
out and confront the ecosystem issue: the accelerating gap between
how fast AI evolves and how slowly compliance catches up.
The future of AI governance in Microsoft 365. Governance
always evolves slower than innovation. It's not because compliance
officers lack imagination. It's because technology keeps moving the goalpost

(16:10):
before the paint dries. Copilot notebooks are another case study
in that phenomenon. Microsoft, to its credit, is already hinting
at an expanded Purview framework that can ingest AI-generated artifacts,
label them dynamically, and even trace the source fragments behind
each synthesized answer. It's coming, but not fast enough for
the enterprises already swimming in derivative content. Microsoft's strategy is
fairly predictable. First comes visibility, then control, then automation. Expect

(16:34):
Copilot's future integration with Purview to include semantic indexing: AI reading
AI, scanning your generated summaries to detect sensitive data
drift. In plain language, that indexing could classify synthesized text
based not on file lineage but on semantic fingerprinting, patterns
of finance data, regulated terms, or PII expressions recognized contextually.
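Nobody outside Microsoft knows exactly what that semantic indexing will look like, but the idea can be approximated today with crude pattern matching over synthesized text. The fingerprints below are toy examples; a real system would rely on trained classifiers rather than regular expressions.

```python
import re

# Toy fingerprints: patterns that suggest regulated content in plain prose.
FINGERPRINTS = {
    "PCI":     re.compile(r"\b(card(holder)?|credit card|pan)\b", re.IGNORECASE),
    "PII":     re.compile(r"\b(date of birth|passport|social security)\b", re.IGNORECASE),
    "Finance": re.compile(r"\b(revenue|ebitda|forecast)\b", re.IGNORECASE),
}

def fingerprint(synthesized_text: str) -> set[str]:
    """Classify AI-generated text by what it talks about, not where it came from."""
    return {tag for tag, pattern in FINGERPRINTS.items()
            if pattern.search(synthesized_text)}

print(fingerprint("Retail growth driven by premium card users lifted revenue."))
# -> {"PCI", "Finance"} (order may vary)
```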

(16:54):
Essentially, compliance that reads comprehension rather than metadata. Audit logs
will follow. The current logs show who opened a notebook
and when. Future ones will likely show what the AI referenced,
how it synthesized, and which sensitive elements it might have inherited.
Imagine a compliance dashboard where you can trace an AI
sentence back to its contributing documents, the same way version
history traces edits in SharePoint. That's the dream of a fully

(17:17):
auditable semantic chain. When that arrives, governance finally graduates from
forensic to proactive. Now I can already hear the sigh
of relief from risk teams: good, Microsoft will handle it. Incorrect.
Microsoft will enable it. You will still own it. Governance
doesn't outsource well. Every control surface they release needs configuration, tuning,
and, crucially, interpretation. A mislabeled keyword or an overzealous retention

(17:39):
trigger can cripple productivity faster than a breach ever could.
This is where enterprises discover the asymmetry between platform features
and organizational discipline. Tools don't govern, people do. And yet
some optimism is warranted. Dependency on cloud architecture has forced
Microsoft to adopt a shared responsibility model. Security is theirs,
compliance is yours. With Copilot artifacts, expect that division

(18:01):
to sharpen. You'll get APIs to export audit data, connectors
to pull notebook metadata into Purview, and policy maps
linking AI containers to business units. What you won't get
is an automatic conscience. The tools can detect risk patterns,
they can't decide acceptable risk tolerance. The fascinating part is
philosophical: knowledge work now produces metadata faster than humans
can label it. Every sentence your AI writes becomes both

(18:23):
content and context, a self documenting concept that mutates with use.
The distinction between record and commentary dissolves. That makes the
compliance challenge not metaphorical but ontological. What is a document
when the author is probabilistic? Traditional filing systems expected discrete artifacts.
This report, that file. AI erases those edges. Instead, you
govern flows of knowledge, not fixed outputs. In that world,

(18:46):
Purview and DLP will evolve from file scanners to contextual interpreters,
compliance engines that score risk continuously, the way antivirus scans
for behavioral anomalies. The control won't happen post-creation, it
will happen mid-conversation. Policies will execute while you type,
not after you save. Ironic, isn't it? The safer AI
becomes at preventing leaks, the more dangerous its unmanaged

(19:06):
by-products grow. Guardrails reduce immediate exposure, but multiply the debris
of derivative data behind the scenes. Safer input leads to
riskier shadow output. It's like building a smarter dam. The
water doesn't disappear, it just finds smaller cracks. To fix that,
enterprises will establish something resembling an AI registry, a catalog
of generated materials automatically logged at creation. Each Copilot session

(19:29):
could deposit a record into this registry: prompt, data sources, sensitivity tags,
retention date. Think of it as a digital birth certificate
for every AI sentence. The registry wouldn't judge the content,
it would prove existence and lineage, so governance can follow.
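A registry entry is essentially a birth-certificate row. The sketch below writes one JSON line per Copilot session; the field names are invented for illustration, and the append-only file stands in for whatever store such a registry would actually use.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class RegistryEntry:
    """Digital birth certificate for one AI-generated artifact."""
    session_id: str
    prompt: str
    data_sources: list[str]
    sensitivity_tags: list[str]
    retention_until: str  # ISO date when the derivative must graduate or be purged

def register(entry: RegistryEntry, path: str = "ai_registry.jsonl") -> None:
    """Append-only log: prove existence and lineage, don't judge content."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(entry)) + "\n")

register(RegistryEntry(
    session_id="nb-7f3a",
    prompt="Summarize Q3 bid performance",
    data_sources=["sharepoint://bids/q3", "onedrive://analysis.xlsx"],
    sensitivity_tags=["Confidential"],
    retention_until=str(date(2025, 12, 31)),
))
```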
This is where the ecosystem heads. AI writing entries into
a secondary system documenting itself. Governance becomes recursive: artificial intelligence

(19:51):
producing compliance metadata about its own behavior. Slightly terrifying, yes,
but also the only scalable model. Humans can't possibly index
every conversation. Algorithms will have to regulate their own progeny.
So the moral arc bends toward visibility. What began as
transparent productivity will require transparent accountability. In the end, AI
governance in Microsoft 365 won't be about building

(20:11):
fences around machines. It'll be about teaching them to clean
up after themselves. Beautiful tools, as always, leave terrible messes
when no one asks who holds the mop. The real
risk isn't the feature, it's the complacency. Copilot Notebooks aren't villains.
They're mirrors showing how eagerly organizations trade traceability for convenience.
Each elegant summary disguises a silent transfer of accountability from

(20:34):
systems that documented to systems that merely remembered. The warning
is simple. Every AI generated insight is a compliance artifact
waiting to mature into liability. The technology doesn't rebel, it
obeys the parameters you forgot to define. You can't regulate
what you refuse to acknowledge, and pretending temporary workspaces don't
count is the digital equivalent of sweeping filings under the
server rack. Complacency is the accelerant. Companies got burned by

(20:57):
Teams sprawl, by SharePoint drives that became digital hoarding
facilities, by Power BI dashboards nobody secured properly. Notebooks repeat the
pattern with better grammar. The novelty hides the repetition. The
fix isn't fear, it's forethought. Build the rules before regulators do:
mandate labels, audit usage, teach people that AI convenience doesn't
mean moral outsourcing. Governance isn't a wet blanket over innovation.

(21:20):
It's the scaffolding that keeps progress from collapsing under its
own cleverness. Productivity used to mean saving time. Now it
has to mean saving evidence. The quicker your organization defines
notebook policies, how creations are stored, tracked, and retired, the
less clean up you'll face when inspectors, auditors or litigators
start asking where the AI found its inspiration. So audit
those notebooks, map that shadow data lake while it's still

(21:40):
knee deep. And if this breakdown saved you a future
compliance headache, you know what to do. Subscribe, stay alert,
and maybe note who's holding the mop. Efficiency is easy.
Accountability is optional only once.