
April 22, 2025 97 mins

Guest Info:

  • Name: Bronwen Aker

  • Contact Information: https://br0nw3n.com/

  • Time Zone(s): Pacific, Central, Eastern

 

–Copy begins–

 

Disclaimer: The views, information, or opinions expressed on this program are solely the views of the individuals involved and by no means represent absolute facts. Opinions expressed by the host and guests can change at any time based on new information and experiences, and do not represent views of past, present, or future employers.

 

Recorded: https://youtube.com/live/guhM8v8Irmo?feature=share 

 

Show Topic Summary: By harnessing AI, we can assist in being proactive in discovering evolving threats, safeguard sensitive data, analyze data, and create smarter defenses. This week, we’ll be joined by Bronwen Aker, who will share invaluable insights on creating a local AI tailored to your unique needs. Get ready to embrace innovation, transform your work life, and contribute to a safer digital world with the power of artificial intelligence! (heh, I wrote this with the help of AI…)



Questions and topics: (please feel free to update or make comments for clarifications)

  1. Things that concern Bronwen about AI (https://br0nw3n.com/2023/12/why-i-am-and-am-not-afraid-of-ai/):
    • Data Amplification: Generative AI models require vast amounts of data for training, leading to increased data collection and storage. This amplifies the risk of unauthorized access or data breaches, further compromising personal information.
    • Data Inference: LLMs can deduce sensitive information even when it is not explicitly provided. They may inadvertently disclose private details by generating contextually relevant content, infringing on individuals' privacy.
    • Deepfakes and Misinformation: Generative AI can generate convincing deepfake content, such as videos or audio recordings, which can be used maliciously to manipulate public perception or deceive individuals. (Elections, anyone?)
    • Bias and Discrimination: LLMs may inherit biases present in their training data, perpetuating discrimination and privacy violations when generating content that reflects societal biases.
    • Surveillance and Profiling: The use of LLMs for surveillance purposes, combined with big data analytics, can lead to extensive profiling of individuals, impacting their privacy and civil liberties.

  2. Setting up a local LLM: CPU models vs. GPU models. Pros/cons? Benefits?

  3. What can people do if they lack local resources? Cloud instances? EC2? DigitalOcean? Use a smaller model?

  4. AI code suggestions sabotaging the software supply chain: https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(01:00:04):
Okay, everybody.
Hello.
Hey, all my guests.
And co-hosts are here.
I'm Brian, this is BrakeSec.
And welcome everybody.
Hello. Hey,
it's working.
Yay.
Okay, I think we had a little double
there for a second, but

(01:00:25):
I think we're better now.
So it's fun.
I'm Brian and we're here with Bronwen,
who I had mentioned earlier, was speaking
at a great number of places about AI.
And I'll let Bronwen introduce herself,
but I met her at BSides San Diego.

(01:00:47):
She was giving a workshop
on making your own local AI.
And I wanted to do that course.
Unfortunately, I was gooning for that.
I had to do security for our conference.
And I was like, oh, I would
have loved to have sat in.
And then I was like, wait a minute.
I have another venue that I can put her
in, if she'll do so.

(01:01:07):
And she was amenable to it.
And we finally got this started.
So yeah, Bronwen, thank you for coming.
And please let the world know about
yourself, how you got to where you are
and your interests in AI.
We're supposed to keep this to only a
couple of hours, right?

(01:01:29):
Yeah, my name's Bronwen Aker.
I work for a company called Black Hills
Information Security.
Before that, I worked for another small
company you might've heard
of called the SANS Institute.
And before doing that, I did a lot of web
development all over hither and yon and

(01:01:50):
various other kinds of development.
And it's embarrassing to admit to how
long I have actually
been building web pages.
1992 was my first webpage.
Before style sheets.
Very much before style sheets.

(01:02:11):
I remember the days when
we had to use the font tag.
I survived the blink tag. Yes,
marquee tag.
Marquee tag baby, yeah, that's all.
So these, so I've been on various
bleeding edges of new technologies.

(01:02:32):
And my latest techno obsession is
artificial intelligence.
For the most part, I've been focusing on
large language models, but I do keep eyes
and ears out for things
going on with other kinds of AI.
Artificial intelligence is such a vast,

(01:02:53):
vast universe that it's impossible to
encompass all of it.
But I try to keep an ear
out for the big movements.
And yeah, I've been
doing talks lately about it.
And I've been floored by how well
received not only, well received they've

(01:03:14):
been, not only in terms of the fact that
I was talking about it, but also that I
was hitting a need in terms of what
people needed to know that maybe hasn't
been made as
accessible to people in general.
Okay, very cool.
So that's what I do.
Very nice.

(01:03:36):
So we have a few topics to discuss.
You've got a website that I've cribbed
from a little bit.
I think the first one I saw when I looked
through this was Bronwen's blog post from
2023 and says, "Why I am
and am not afraid of AI."

(01:03:58):
And so I read through this.
And the first thing I saw was the why
you're not, or the why you are afraid of
it is longer than why
you're not afraid of it.
Yes,
I think that's right.
One side was longer than the other.

(01:04:19):
Yeah, one side of the
argument is longer than the other.
Yes, in this case.
And it was like why you're, what
frightens you about AI was longer than
what doesn't frighten you about AI.
And I was like, oh, okay.
So I wanted to talk a little bit about
why it doesn't frighten you.
Because I think we want to speak only
about the positives of this bit.

(01:04:41):
So there has been a lot of FUD about AI:
but it's gonna replace all the developers
in the next 18 months.
Security people are gonna go away.
It's gonna solve all of our problems.
That's not what I'm worried about.
But I mean, so in your opinion, why does

(01:05:03):
AI not frighten you?
Well, it frightens me
and it doesn't frighten me.
And the reasons have to do
with use cases in most situations.
There are many opportunities where AI can
dramatically improve quality of life, not

(01:05:24):
just for you and me, but for every human
being on the planet.
And leveraged appropriately, used
appropriately, it could help us with
areas where we have
had blockers for decades.
On the other hand, in the hands of people
with malicious intent.
And that can be mind control

(01:05:49):
or any form of
disenfranchisement of humans.
It could be a very powerful tool.
And unfortunately we are seeing some of
that already because we just are.
Right, right.
Is that because it's in control or it's
being controlled by people who are just

(01:06:10):
trying to turn a buck?
I mean, is there legitimate, like, non-Sam
Altman ways of using AI
in a positive way?
Well, unfortunately it
comes down to the classic adage.
Knowledge is power.
And one of the
unfortunate realities of power,

(01:06:32):
people sometimes become addicted to it.
And it doesn't matter what it will take
or who gets hurt in order
for them to get their next fix.
Okay, all right.
Ms. Berlin, Mr. Boettcher, please feel free
to, you know, get involved.
Please, please.
Yeah, I mean, I was just gonna say like

(01:06:54):
that and not even
purposefully malicious intent.
Like just think, I just consider it even
more security job security because just
think of how like the ease
of access of all of it now
and people are just gonna implement it

(01:07:15):
wrong in so many places.
And we're just gonna continue to have
harder problems for us to solve.
Yep.
Because of it.
Yeah, I mean, the same
thing came up with automation.
You know, there was examples of,
you know, what was it like three, four,

(01:07:37):
five years ago, they were talking about,
you know, machine learning at the time
was still in its infancy, but it was
gonna replace all of security people.
You know, with the rise of the new AI
models that have gone on, there's been a
suggestion that, you know, that will make
some of our jobs obsolete, like SOC
analysts and things like that.
Was that when the next
gen buzzwords came out?

(01:08:01):
Was that that particular way?
Yeah, next gen- It's hard to keep track.
Yeah, blockchain enabled.
There's always gonna be a silver bullet.
Yeah, next gen firewalls.
That's right.
Your network engineers are
gonna, you know, go away.
They're gonna have to get, you know,
construction jobs or something.
They'll have all the same problems, but
now AI causes them.

(01:08:24):
Exactly, exactly.
Personally, I want an AI that can go
around and clean up after my
dogs without adult supervision.
Right.
Get me something like, develop an AI that
can do that, you'll make bank.
Yes.
That's right.
Build a better mousetrap, right?
Yep. Yep.
So there are examples of that.
I think you mentioned that in the Discord

(01:08:45):
here that, you know, we should be using
this as a tool to help us with automation
to increase our output you know, I've
mentioned several times on the streams,
not on Sundays, but I'm learning coding
using a Python AI that is
called Boots on boot.dev.

(01:09:06):
And, you know, it's helped me understand
coding better because I don't sit and
flail when I have an issue.
I can ask Boots in this
case how to fix the issue.
And there's a whole thing, they call it
vibe coding now, depending on who you
talk to, you get blocked or whatever.
I don't know what that means.
You know, I hear I'm
cool if I say that word.

(01:09:28):
No, you're not.
It means that you're clueless.
Well, I am clueless.
I'm trying several times to learn Python
and it hasn't stuck, so.
I think vibe coding is just the next
version of people copying and pasting
from Stack Overflow.
That's exactly it.
Right?
Vibe coding is, and I think I dropped out

(01:09:50):
for a second, vibe coding is taking
whatever an AI gives you and copy-pasting
it without taking a look to see whether
it got it right, without checking to see
if it actually does what
it says it's supposed to do.
None of that stuff.
So vibe coding, Amanda,
you said it perfectly.
It's the latest thing of
copy-pasting from whatever, Stack Overflow.

(01:10:13):
Yeah.
Right, right.
And then I put an example in the show
notes of one that we talked
about at work the other day.
Where,
speaking of vibe coding, people are just
copying and pasting code from AI and it's
like hallucinating packages.
Yeah.
Right?
So like attackers are like typo squatting

(01:10:38):
on those fake packages.
And I mean, we've seen that too in like
typos in other code, right?
There's been a typo on Stack Overflow or
a link that doesn't go anywhere anymore.
And yeah, it's the
same concept just faster.
Right, right.
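(A minimal way to guard against that, sketched in Python: before installing a package an AI suggested, check that the name actually exists on the registry. The second package name below is made up to stand in for a hallucination.)

    import requests

    def exists_on_pypi(name: str) -> bool:
        # PyPI's public JSON API returns 404 for packages that don't exist.
        resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
        return resp.status_code == 200

    print(exists_on_pypi("requests"))        # True: a real, widely used package
    print(exists_on_pypi("requets-helper"))  # hypothetical name an LLM might hallucinate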
But your use case, Brian, is one of the

(01:11:00):
things that I hammered on a fair amount
when I was doing the
prompt engineering talks.
And one of the things that I specifically
mentioned is any person who feels at all
uncomfortable for any reason, asking a
coworker or someone else, or maybe you

(01:11:22):
don't have someone with expertise in what
it is you're trying to learn.
An LLM, especially something like
ChatGPT, Claude, and the various ones out
there, they can be massively helpful to
an individual who likes to, who enjoys
self-teaching, who enjoys
learning things on their own.

(01:11:43):
It becomes a completely non-judgmental
resource to ask questions and refine your
own thinking if you
choose to use it in that way.
And that's where it's a tool.
Used well, it can make so
many points of pain go away.
Right, except PowerShell.

(01:12:05):
(Both Laughing)
Exactly.
Well, it could query some PowerShell, but
it can teach you, you know.
And a lot of the Stack Overflow comments
are like that, because you're not checking,
because you're not skilled yourself, and there's
some blind trust, right?
We blind trust a lot of new technology.
It's whether or not you are
in a new Tesla or shit car,

(01:12:30):
or blockchain, everybody's trying to get
on blockchain, everybody's trying to get
on AI because it's a force multiplier.
It's a way to lay people off in some
cases to save money.
But yeah, so there's a lot that can be
done with it in a positive manner, but

(01:12:50):
for me, it feels like a lot of companies
are just trying to bolt AI onto every
product they have, whether or not it
actually makes sense, or
it's like the blockchain stuff.
Everything's blockchain-enabled.
It's like, well, how are
you actually using blockchain?
Well, we're either using it improperly,
or we're just saying it has blockchain.

(01:13:11):
It really doesn't have
anything like that in there.
And there's only very few that are
actually using AI or blockchain in a
legitimate way, because they've invested
a ton of money in this, so they're
looking for that return on investment.
And there's already been people in the
industry, like I think Satya Nadella from
Microsoft was starting
to temper expectations of,

(01:13:34):
we've spent $10 billion and we're still
expecting maybe to get our return on
investment sometime
between now and the year 3000.
So,
Good luck with that.
Yeah, and you say AI and
then people throw money at you.
So, yeah, I'm not pessimistic about it,

(01:13:56):
because as somebody who is of a certain
age and is trying to upskill, having it
as a tool to use and being able to
understand at least what's going on when
I'm writing the code makes sense.
And I find that quite interesting.
So,

(01:14:17):
you say you work for Black Hills and you
are a technical writer as well, right?
You do some technical writing?
What I started doing for them at first
was just editing and writing,
co-authoring their
reports from Pentesters.

(01:14:38):
And back then, we had maybe
50 people all told at BHIS.
It's grown quite a bit since then.
And now there are four of us who do the
Grammarian style writing and the peer
editing and the reviewing is done by
other testers within the testing core.

(01:15:00):
So,
every report gets a lot of eyes on it
before it goes out the door.
Okay.
But in addition, now I'm
doing stuff related to AI.
I've done a couple of other things
involved with Antisyphon and other
tribes within the
BHIS tribe of companies.

(01:15:22):
So, I wasn't trying to jump around here.
My thought was I know that when I worked
at Leviathan a few years ago, we were
trying to create like a,
you know, if you have a cross-hate
scripting, we tried to have not canned
language, but we tried to have language
that was very similar as like this is
what cross-hate scripting is.

(01:15:43):
And then, you know, we would have the
recommendations based on
that engagement at the time.
I imagine with the rise of AI, that
you're going to have people who are gonna
be writing reports and using things like
ChatGPT to say, hey, you know, I've got
insecure direct object reference.
Please, as an, you know, if you're an
expert in, you know, in web app security,

(01:16:04):
please explain to me that in a way to
save time and effort
of writing that report.
And I imagine that, you know, because I
used to work with people who had to write
these reports, nobody
likes writing those reports.
Even though it's what they pay for, what
the customer pays for,

(01:16:25):
the temptation is there to use that. So,
are we leveraging it at certain levels,
like, you know, consultancies for that?
I don't want to say that, you know, y'all
are using that, but I was like, is that

(01:16:45):
being used in our industry
to, you know, more often now?
Oh, absolutely, absolutely.
I know for a fact
that a lot of people in,
throughout the entire
industry are using AI.
And we still, we have
an official AI policy.
We have things that we've
defined for the time being.

(01:17:07):
This is acceptable use
for business purposes.
This is not. And,
but given how quickly AI is evolving, the
policy that we have today may be totally
different a week from now because of how
quickly that landscape is changing.
And the beauty of it, again, it all

(01:17:28):
depends on how it's used.
If somebody is knowledgeable,
the thing that I love about,
and we'll just stick with
LLMs for the time being,
my roommate is a retired military
firefighter and has been gone, he's gone
back to college for a

(01:17:49):
degree in emergency management.
He knows what he's talking about.
He knows what he wants to
say, but he's not a writer.
Right.
And what the large language models do is
empower people who have never,
for whatever reason, their brain doesn't

(01:18:11):
work well when it comes to stringing
words together to communicate things.
But he can recognize, yes, that's what I
want to say, or no,
that's not it.
And that's how he uses it.
And the things that he
has been able to generate
using a large language
model as an assistant. Right.

(01:18:34):
That has been wonderfully empowering.
And that's a beautiful positive use case
where used, well, yes, children can
benefit, but they have to
understand how to use the tool.
Managers, any person who's driving a
keyboard for any type of work can benefit

(01:18:57):
from it, and it can become a force
multiplier in that regard.
But again, it's
knowing how to use the tool.
And what I foresee is that same kind of
rapid evolution that happens anytime
there's a disruptive innovation.
So when cars were first invented, there

(01:19:19):
were many different ways to conduct one
of these horseless carriages.
And it was only after years of evolution
and trial and error that
things started to settle down.
And now cars are,
they're almost appliances.
You don't even hardly need

(01:19:40):
to know how to drive one.
Right.
Or build it or maintain
it or anything, right?
We are at the very, very beginning of this.
And I guarantee 10 years from now, what
we're doing with this technology will be
almost unrecognizable compared to what

(01:20:02):
we're doing with it now.
Yeah.
Yeah.
Yeah, I,
you know, we were talking about, you
know, reporting and all that.
And like, my concern is that people will
assume that we're just generating a bunch
of stuff based on AI and that they're not

(01:20:23):
getting their money's worth.
How are we going to continue to provide
value when, you know, companies can say,
"Well, you sent me something that I can
generate myself "in
ChatGPT or what have you."
Is it good prompt engineering, you know,
or are they spending money on our prompt

(01:20:43):
engineering and
getting the answers there?
Is there still going to be a need for,
you know, people to pop shells and do
physical pen tests or, you know, "Hey, I
can download Kali and White Rabbit Neo
"and do everything that you wanted to."
How are we going to continue in InfoSec
showing our value when people will just
want whatever's quick, fast and easy?

(01:21:08):
There isn't an easy answer to that.
And I absolutely know that there are
people out there who will be able to
leverage AI tools to do a lot
of fake it till you make it.
But in the process, I think that they
would probably learn a lot along the way
if they're paying
attention to what they're doing.

(01:21:31):
On the other hand,
people who actually
know what they're doing
will be able to focus more on doing
better at the hard part.
And most recently with BHIS, I've been
working for most of the year with the

(01:21:52):
continuous pen testing group.
And that team being immersed and around
and watching the things that they do and
the way that they think is totally
different from how normal people operate.
It's a completely different come from.
And it's not the kind of thing that can
be readily taught without a community
immersion experience where you're getting

(01:22:15):
a chance to see people.
So a live gathering with a live CTF where
you're working with a team, that would be
the closest thing to being able to share
information the way they do and to get
that kind of alternate perception.
The LLM isn't gonna give that to you, no,
but it can also help you learn so that

(01:22:38):
when you go into an environment that
you've got more to bring to the party.
Right, right.
That was totally rambling, I know.
No, that's great.
That's great stuff.
So we actually have a couple of questions
from our Twitch chat.
And I do apologize if you
were watching us on YouTube
having some issues with encoding errors.

(01:22:59):
Simulcasting didn't work so well.
So I didn't wanna have a
bunch of dropped packets there.
So somebody said, WaveAJ,
thank you for being here.
First time joining.
As someone who's looking to transition
into AI risk roles, specifically in GRC,
what are some tips on finding a role?
One thing I've been doing is training and

(01:23:21):
awareness training in senior homes about
deep fakes and smishing.
So it sounds like WaveAJ is working to
help people be educated about
misinformation or being scammed.
So are there roles out there in GRC or I

(01:23:43):
mean, or risk roles for GRC about this?
If you know.
I'm not a GRC expert.
(Laughs)
I am not a GRC expert myself either.
I would imagine that there's gonna be
some governance roles
about AI, the overuse of it.

(01:24:06):
I would say you might need a law
degree for that, or you're
gonna need to work on that.
I would almost, there
has to be GRC for this.
It's just, it's moving faster than GRC
can keep up as we've seen.
I was gonna say, I don't know that they
have updated anything to catch up with

(01:24:26):
how fast AI has hit the market.
Right, right. Yeah.
I know even things like PCI was a little
behind the times on passwords for years.
They were still touting upper or lower,
number, special, eight characters when it
was like, yeah, maybe
that shouldn't be a thing.
So yeah, I would say, yes,
there will be jobs for that.

(01:24:49):
We're definitely on
the bleeding edge. Yeah.
I went to the SANS AI Summit
that just happened recently.
I heard our good friend, Mick Douglas,
who was also there, asked a few questions
of the speakers in the audience, but they
had pushed out some
guidance at SANS about AI security.
There may be some governance bits in

(01:25:11):
there as well, but I think the,
one of the things that I wanted Bronwen
on for was because a lot of
companies are wanting to use AI.
They just don't want to use an AI they
don't control, like the
new cloud, if you will.
So they don't want to put their entire
business product roadmap
inside of ChatGPT and then say,

(01:25:31):
what do, how make,
please give me a billion dollar idea.
And then ChatGPT is, implied or no,
there's a thought that ChatGPT would
take that information and train on it so
that the next person who comes in and
says, hey, what do, you'd be like, well,
customer, you know, your competitor has
uploaded their product, you know, line

(01:25:53):
for the next three years.
And this is what I told them to do.
So actually it isn't so much having that
information be used to train the next
generation, all of the, all of the
providers of say, web-based or app-based
LLM use, whether it's Claude or ChatGPT,

(01:26:14):
OpenAI or one of the others, if you're
having to send something, sound or text
off to a server and then have it come
back, there are many, many opportunities
for that conversation to be intercepted
and for that information to be leaked.
And unfortunately the usual suspects are

(01:26:37):
still the most reliable suspects as far
as security for that.
But then assuming that the in-transit
conversation remained intact,
most of them, I know OpenAI and I'm
pretty sure Anthropic both retain the
conversation and its contents.

(01:26:59):
So the entire
interaction for a period of time
and then depending on whatever their data
retention policy is,
theoretically it will be deleted.
Right.
So for however long that
data is at rest, it is at risk.

(01:27:20):
Yeah.
So I'm not necessarily concerned so much
about the providers like OpenAI,
Anthropic, and so on, but it's also about
the fact that they are very big targets,
very tasty targets because of the volume

(01:27:41):
of data that they are handling.
And if somebody gets persistence inside
their network, they've got it all.
Right, right.
And I mean, we've seen an example of that
deep, what was that, not
deepfake, it was DeepSeek.
Yeah, DeepSeek.
The Chinese one.
Yeah, there was some
suggestions in the news that,

(01:28:03):
DeepSeek or somebody came in and said,
hey, they did some speed dating and they
distilled the AI from, it was ChatGPT,
one of those big ones, and they
basically distilled everything to the
people who wrote DeepSeek over there.
And it's possible, I mean, it's code, so

(01:28:24):
it's written by people who don't
necessarily think about security, which
is why I don't think security will ever
go away because there
will always be bugs in code.
Hallucinations will always still exist
and input validation is still a thing.
So yeah, that's why I
don't think it'll go away,
security wise.

(01:28:46):
But yeah, it almost feels like we're back
to the on-prem versus cloud thing.
It's like when people, back in the day,
people didn't wanna move to the cloud
because the cloud was
somebody else's computer.
So everything was
supposed to stay in house.
Now we're back to AI and it's like, well,
we don't wanna go out to the cloud, we're
gonna stay in house.
And there's still a lot of companies that

(01:29:07):
are still wanting to do that because they
have a lot of information
that they can't collate easily.
And so they're wanting to create local
AIs that can, like a RAG, retrieval
augmented generation, which I have some
links in our show notes for that.
And you can set those up.

(01:29:29):
And that's one of the reasons, one of the
many reasons I wanted to have you on,
Bronwen, but, 'cause your workshop at
BSides San Diego, and I think you did it at
LayerOne and at HOPE,
forgive me if I'm wrong.
LayerOne hasn't happened this year yet.
Yeah, oh, it's next month.
That's right.
It's coming up in May.
It was SCALE that I was at.
That's it.

(01:29:49):
SCALE.
But I mean, I talk at LayerOne all the
time and I've already submitted a repeat
of the local LLM talk that I gave, well,
no, I did the workshop at BSides, but
it's the same talk that I gave at SCALE.
I'll be giving it again, assuming that
LayerOne picks it up. So,

(01:30:10):
rag is very exciting and especially for
individuals or organizations that have
security concerns for whatever reason.
They want to have systems
that are under their control.
And the reality is with a fairly robust
gaming system, I can do a tremendous

(01:30:31):
amount just on my own.
Give me a couple of extra,
you know, instead of one video card with
24 gigabytes of video RAM, get me five.
Right.
And then let's figure out a way to put
them into however many systems necessary

(01:30:53):
to utilize them all and then go to town.
With that kind of power, the things that
you can do are just awe inspiring.
I was able to train on my old system.
Last Christmas, I gave myself a new more
powerful system with GPUs and my old
system having just eight cores.

(01:31:16):
And okay, it did have 64
gigs of RAM, but no GPUs.
That system still managed to train
hundreds of thousands of images.
This was for my capstone project for a
class I took last year.
Take images of red blood cells.

(01:31:37):
Some have been infected by malaria.
Some are not infected.
And so you've got your training data.
You've got your testing data.
You use that, train a brand new model
by combining an existing model and adding
this additional training so that it can
recognize infected red blood cells, the

(01:31:58):
positive potential impact
for something like that.
To be able to save lives through training
something that I can
train on my desktop. Right.
That's the type of innovation that we
have the ability to do today.
Right.
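(For the curious, the shape of that kind of project, sketched in Python with Keras: start from an existing pretrained model, freeze it, and train a small new head on labeled cell images. The dataset layout, image size, and base model here are assumptions for illustration, not the actual capstone setup.)

    import tensorflow as tf

    # Labeled images sorted into cells/infected/ and cells/uninfected/ (hypothetical layout).
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "cells/", validation_split=0.2, subset="training", seed=42,
        image_size=(128, 128), batch_size=32)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "cells/", validation_split=0.2, subset="validation", seed=42,
        image_size=(128, 128), batch_size=32)

    # Start from an existing model trained on ImageNet and freeze its weights...
    base = tf.keras.applications.MobileNetV2(
        input_shape=(128, 128, 3), include_top=False, weights="imagenet")
    base.trainable = False

    # ...then add a small trainable head for the new infected/uninfected task.
    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=5)  # CPU-only works, just slower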
So- So my other question is like, what
other cool things have you seen for

(01:32:20):
innovation that you just
like really geek out over?
And I love that one.
That's fantastic.
Just to give an idea too, imagine the
number of X-ray records an established
dentist's office would collect over a
period of, let's say just five years.

(01:32:43):
Now run that set of data.
Through a training process.
And it's not for anybody, I mean, okay.
Yeah, I have my bachelor's and master's
both in cybersecurity.
I've been working with computers a long
time but I am not the
most wonderful programmer.

(01:33:04):
I am not the most
exotic geek on the planet.
There are other people out there who are
smarter than me who can take this stuff
and run with it using off the shelf
equipment and information that you can
get just by learning how
to use Jupyter Notebooks.
Right, right.

(01:33:25):
The potential for
grassroots innovation is huge.
Right.
And that is something
that hasn't been brought up.
They want you to always go for the
biggest and the most expensive.

(01:33:46):
So that the ones who are peddling all of
these services they want you to stay
addicted to their big data and ignore all
of the data that you have already.
You said yourself,
Brian, you're a data hoarder.
How many decades of data, how many
different training opportunities, how

(01:34:08):
many different specialist systems do you
think that you could train with data that
you already have in your possession?
Oh, I've got a couple of 20 terabyte
drives sitting over there as
backups with stuff on them.
I've got, here's a
couple of terabytes here.
Here's a terabyte.

(01:34:29):
I've got four terabytes in this.
So.
Okay, you have volume.
Do you have organization?
No, no organization whatsoever.
Right, no, not at all.
And some of that is why I was like hoping
the rag stuff would help
out because then I could just.
Maybe AI could help you organize it.
Yeah, exactly.
(Laughing)
Brian, you have 3 million of this file

(01:34:50):
and you should probably not.
I did talk to somebody at BSides Charm
that was training local AI or
LLM on all of the Enron data.
Oh, wow.
Right?
I'm like, oh wow, interesting.
Yeah.
Well, I mean, in that case, any large

(01:35:10):
data set, I mean, you
can do the WikiLeaks stuff.
You could do the, you know, you could
look through the Snowden docs or whatever
and find stuff that people might have
missed or inferences or
analyzing data that, yeah.
You know, maybe use it for
the US budget if we have one.
So.

(01:35:31):
We do, it's.
Yeah.
It'll probably be hidden.
Yeah, yeah.
(Laughing)
So one of the things that I learned
during your workshop was that prompt
engineering is important and that AIs are
only as good as, and I kind of call it
the new Google dorking because if you
don't know how to ask a question to the

(01:35:55):
AI, you're not gonna get that answer.
And one of the things that I've started
doing is giving it a scenario like you're
the greatest dungeon master ever because
I'm asking a lot of RPG questions.
I'm like, you're the greatest dungeon
master of, you know, Wizards of the
Coast, 5E, what's the ruling for this?
And you know, it does pretty good.
And so one of the things you told us

(01:36:16):
about was Daniel Miessler's Fabric
GitHub, which is about a bunch of
examples with regard
to prompt engineering.
Why is prompt engineering, I've given one
example, but prompt engineering is, I
just scratched the surface.
So maybe you could give us some more
examples of why prompt engineering is so

(01:36:37):
important in this respect.
Well, this is where any previous
knowledge of working with computers, if
you've done anything with database
queries and having to extract through
Google dorks or some other method, a way
of crafting something so that you can get
the information you want out, that will

(01:36:58):
be useful experience for you as you learn
prompt engineering because
it's basically the same thing.
It's learning how to provide enough
context, enough specificity, specific
directions, and yet also allow for a

(01:37:20):
little open-endedness.
And it's about knowing
what the question is.
I mean, 42, man, come on.
It's the ultimate answer,
but what's the question?
And without knowing the question, of
course you're not gonna
get anything meaningful.
So it's exactly that same conundrum.

(01:37:43):
And this is where one of the things that
you have to pay attention to if you do
start downloading and using your own
models is an instruct model is not going
to give you the ability to have the kind
of ongoing conversation where you start a
brainstorming conversation.

(01:38:04):
And again, this is another way that LLMs
are wonderful is that I can have a
brainstorming conversation.
I know kind of where I wanna go and I
can't get it out of my brain.
So I use it as a sounding board.
And what about this?
I thought, no, I've got this other thing.

(01:38:24):
And then eventually, boom, now I have
something I can work with for whether
it's writing a blog post or figuring up
another talk or
something related to work.
It's just about
finding that proper use case.
And that sounding board is
another excellent one too.

(01:38:47):
Okay, so you mentioned at least two or
three different models.
Oh God, yeah.
So you said the
brainstorming one and then there's the,
how many are there?
I mean, is it-
Are you talking about
techniques or ways to use models?
Cause let me see if I

(01:39:08):
can find that slide deck.
On one of the slide decks I have for a
previous talk, I break down different
kinds of interactions,
different kinds of prompts.
And so in the prompt engineering talk, so

(01:39:29):
the simplest is like a one-shot question.
What is the airspeed
velocity of an unladen swallow?
And very, very straightforward, very
simple, very Google-esque if you're used
to just typing things
randomly into Google.
Then you can provide something that is a
little more specific where,

(01:39:50):
okay, I have these 30 lines that have
this word and I need to have it replaced
to do something else.
You can do it like that if
you don't know how to use regex.
Right, okay.
If you want to get the most bang for your
buck, however, you wanna have a prompt

(01:40:13):
that has complexity and provides context.
And this is where Daniel Miessler's
patterns, the patterns that are part of
the fabric tool are a
masterclass in prompt engineering.
And in no small part because there are so
many different prompts in such, I don't

(01:40:36):
know why my camera is being so weird.
Sorry about it zooming in and out.
It's driving me crazy too.
That's okay.
What I get for a fancy AI-driven camera,
would you just settle down, please? Anyway,
the prompt that will give you the most
bang for your buck is one where you say,

(01:40:56):
as you mentioned
earlier, you give it a role.
This is your identity.
This is who you are,
your reason for existence.
This is everything that matters to you.
And in as long or short a statement as
you wanna do, who are you?
Then what is the task
that we are going to attack?

(01:41:18):
What is the goal we
are trying to achieve?
So it can be something simple like
putting together a quiz
with questions and answers or refining
the output of some JSON tool.

(01:41:40):
You're getting a JSON file and you wanna
have it cleaned up or turned into
something else and being
able to go through that,
figuring out how to get
a script for something.
I mean, there's so
many different use cases.
And then if possible, give an example of
what that will look like in the output.

(01:42:00):
So for going back to the quiz maker,
which is one of the examples that we
train an LLM to do in
the workshop I give.
Here's an example of the actual structure
of the area of, God, I'm losing my words.

(01:42:23):
The area of specialty, the focus that
we're going on,
subject matter, there we go.
Subject matter, question, answer,
question, answer,
question, answer, whatever.
And then on to saying
now, take the input.
All of that content can go into a simple
markdown file and on

(01:42:44):
Fabric there are dozens.
So if you want something to review
academic papers or help with your
calendars or a Pilates schedule, I mean,
you can set up a model to do
just about anything you want.
And then you combine that.
And if you're working with Ollama, it's a

(01:43:05):
one-line command to combine the model.
That's not, if you haven't downloaded the
model, you need to download it first.
Okay, do this, create, here's the model,
here's what the new
one's gonna be called.
Here's a text file that
you should import into it.
And boom, it's done.

(01:43:27):
Truly, truly accessible to
a large number of people.
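(To make that concrete, a sketch of the whole recipe with Ollama; the file name, base model, and pattern wording here are hypothetical, not the actual workshop files. The role, the task, and an example of the output all live in the system prompt of a model file:)

    # quizmaker.Modelfile -- hypothetical example following Ollama's documented recipe
    FROM llama3.2
    PARAMETER temperature 0.7
    SYSTEM """
    You are a quiz maker. Given a subject, produce a short quiz.
    Format your output exactly like this:
    Subject: <topic>
    Q1: <question> -- A1: <answer>
    Q2: <question> -- A2: <answer>
    """

(And then the one-line combine step, plus running the result:)

    ollama create quizmaker -f ./quizmaker.Modelfile
    ollama run quizmaker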
Right, you just have to know how to get.
So there's always refinement, right?
So if you don't get what you get the
first time, you say, well, this is not
exactly what I'm looking for, I'm looking
for a little more of this
until you get the right answer.

(01:43:47):
My problem is how do you know you've
gotten the right answer?
How do you know it's not a
hallucination in this case?
That's where a little bit of subject
knowledge is helpful or there are always
ways, how can you validate whether a
Nessus scan has pulled up a legit
vulnerability or a false positive?

(01:44:07):
Trust but verify, right?
So in that case, for me, I would not
bother to ask the LLM if this was a
legitimate issue from Nessus.
I'd just go directly to
the verification process.

(01:44:28):
Well, okay, but the LLM may
give you a place to start.
And once you've created a new tool,
if it has demonstrated reliability,
then you give it more trust.
I see, okay.
So for example, I did something similar
when I made my CVE

(01:44:49):
summarizer on ChatGPT.
Let me find the link for you.
It's around here somewhere.
Timely, by the way, the CVE
summarizer, you know, after all the stuff
that CVE, that MITRE has had to go through

(01:45:10):
in the last couple of weeks.
Oh, sorry, Lou.
I'm glad they created the foundation
because now they can not, you know, be on
the teat of the US government and they
can, you know, start expanding that out.
So, but yeah.
I do find that when you do it

(01:45:30):
for like research and stuff,
having it specific, like I asked a lot of
times for it to specifically give me
exact locations of where it
pulled the information from.
Okay, so cite your sources.
Yes, because that's helped a ton.

(01:45:50):
Because even if you use like the Google
thing that pops up the AI thing on the
top, like even the links
that they link to there
are kind of hallucinations, right?
Like they- They can be.
They can be sometimes they are.
Like touch on a topic, but has nothing to

(01:46:11):
do with what you asked.
Right.
All right.
Mr. Boettcher's having some internet
connectivity issues.
So he'll join us back as soon as he can.
Yeah.
So in chat, I shared a link with the CVE
summarizer and this is a beautiful
example of a force multiplier.

(01:46:34):
So when I was, I think I was only one of
two editors that we had at BHIS and we
were getting 15 to 20 or
more reports each per week.
And having to go through a lot of reading
and a lot of attention to detail.
And sadly, one thing that testers will

(01:46:56):
sometimes do is they will name drop
anywhere from one to
five CVEs with no context.
Oh, I see.
So this was like, oh, you know, we
discovered this
vulnerability, much like CVE-2025-3333.
And you're like,
uh.
Okay.
Yeah.

(01:47:16):
Like, you know, we found
an LFI and RBJ and whatever.
Okay.
What are those acronyms?
Hello, yeah.
Yeah. Okay.
So I made the CVE summarizer for me.
Oh, yeah.
So that I would know, I wasn't using, I
wasn't doing it for them.
I wanted to know what does this, they've

(01:47:38):
named dropped a CVE.
What the frack is the CVE doing?
Right.
What is it about?
And I would go initially to NIST or to
MITRE or to wherever and I'm sorry, those
things were absolutely
written by engineers.
And that's not the way my brain works.

(01:48:01):
I am closer to human than that.
So I wanted to have, and I know for a
fact that something covered in one source
may not be covered in
another and vice versa.
So if I could get a summary.
And I told my summarizer to go and
basically figure it out.

(01:48:23):
And it gives me, here's the identifier.
If there's a popular
name, I want that information.
And I have it in a very
specific report-like format.
And matter of fact, I berate it when
it doesn't perform
according to its script.
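(A sketch of what instructions like that might look like, hypothetical wording rather than the actual custom GPT:)

    You are a CVE summarizer for report editors. Given a CVE identifier:
    - Give the identifier and the popular name, if one exists.
    - Summarize the vulnerability in plain English, not engineer-speak.
    - List affected products and versions.
    - Note known exploits, proofs of concept, or reports of attacks in the wild, with sources.
    - Answer in a consistent, report-like format.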
So these are the ones that actually don't
say embargoed or like reserved or

(01:48:45):
whatever that they, hopefully they're
like, you know, this is Dirty COW or
MS08-067 or something
that's fairly well known.
It goes back and forth when I
first made the CVE summarizer.
One of the things that I specifically
asked for are known exploits
or reports of recent attacks.

(01:49:09):
Is it actually being used in the wild?
Are there any proofs of
concept, that sort of thing?
And at first it wouldn't
give me links to any POCs. None.
And then- Well, that kind of makes sense.
It's like, you know, are you making a
Molotov cocktail kind of thing?
A couple months later now

(01:49:29):
it's giving me those links.
Oh, is it because it trusts you now or?
It goes back and forth.
It's really weird.
I don't make a lot of changes to the
custom GPT, the CVE summarizer, because
I want, it works well.

(01:49:52):
So as long as nothing on the other side,
namely the OpenAI side, gets too mangled, it
should be fine just as is, right?
And I can always clone it and make
another one, play with it if for some
reason it isn't doing enough, but it
works perfectly for its intended task.
Nice, okay.

(01:50:13):
Could that be because there's enough
people maybe using the similar AI asking
for the same vulnerabilities and after a
while it's like, okay, there's enough
people asking, maybe
I can start, you know.
Yep.
But this is where if I wanted to have a
better rounded cybersecurity assistant,
say, you know, create a Jarvis with a

(01:50:37):
more narrow focus and maybe call him doc
something or other, I figure some name up
for him, but have
something where it can use RAG.
So any newsletters,
if I were to automate something, just
total brainstorming, if I were to
automate something, I would take every

(01:50:59):
cybersecurity newsletter that I
subscribed to and capture that text and
create a RAG stack from that text that
could be used by an AI that I interact
with on a regular basis.
And then at some interval, I could

(01:51:21):
condense that data and use it as
additional training for a model.
Right.
And when you do that, it's much more
intense because you have to pre-process
the data before it can be fed into the
model and I'm not gonna go into all it,
but if you really wanna go deep down the

(01:51:42):
AI, make your own model rabbit hole,
you're gonna have to learn some data
science, which means not only getting
your math to be better, but pay some
attention to statistical
analysis because you will need it.
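(A minimal sketch of that RAG idea in Python, assuming a local Ollama server and the sentence-transformers library; the model names, newsletter snippets, and question are stand-ins for illustration.)

    import numpy as np
    import requests
    from sentence_transformers import SentenceTransformer

    # Chunks of captured newsletter text (stand-ins; in practice, hundreds of snippets).
    chunks = [
        "Vendor X patched a critical auth bypass in its VPN appliance this week.",
        "Researchers reported typosquatted packages targeting Python developers.",
    ]

    # Embed the chunks once, then embed each question and retrieve the closest chunk.
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

    question = "What happened with malicious Python packages recently?"
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(chunk_vecs @ q_vec)[::-1][:1]  # top-1 for brevity

    # Stuff the retrieved context into the prompt and ask a local model via Ollama's REST API.
    prompt = f"Context:\n{chunks[top[0]]}\n\nQuestion: {question}\nAnswer using the context."
    resp = requests.post("http://localhost:11434/api/generate",
                         json={"model": "llama3.2", "prompt": prompt, "stream": False})
    print(resp.json()["response"])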
I have a data wizard at work.
She is amazing and I am not good at that.

(01:52:07):
And she knows her way around a SQL query.
She has,
yeah, it's truly, she's a
data wizard and I call her that.
And yeah, I mean, I've been trying to
learn that stuff, but
yeah, my math is not so good.
I'm still learning with
an abacus and stuff, so.

(01:52:27):
I have to tell you that, like, ChatGPT
and Anthropic have made my
SQL skills infinitely better.
Oh, I love it,
especially, so we use like Hadoop.
Well, we use a lot of shit,
but Hadoop is one of them.
So HiveQL is one of those.
So it's like, you feed at the table in
the fields and you say, well, I need this
field and this table and this table and I

(01:52:50):
need to make a thing
that munges them together.
It's actually better
than, because I have no clue.
So I just run it and then hope it works.
I have no idea.
I vibe, you know, querying.
You're vibe coding.
I vibe querying, yeah.
Because I have no idea if it's going to
work or not, but it's
non-destructive for me.
So I just pop it in and I go, oh, it

(01:53:11):
doesn't work or it's trying to do blah.
And I say, hey buddy, it doesn't work.
You know, here's the
error message I'm getting.
Maybe you should try again.
And it's like, oh yeah, sure, no problem.
Here, try this.
And then, as we're moving along, I
eventually get what I need.

(01:53:31):
But yeah, it's, yeah, I guess in that
case, I'm vibe prompting because I don't
know HiveQL for nothing.
But it's work, it works.
It's actually better than Avid.
Like after the second time
and then it figures it out.
Cause it's like, you know, I need to, you
know, pull this field from this one and,
you know, do a fuzzy match on these

(01:53:51):
fields here and make sure that they work.
And not bad, it's not bad,
especially for the SQL stuff.
So, you know, I don't know if you want to
do this, but one of the things that I did
kind of suggest was that
we would set up a local AI.

(01:54:13):
And I don't know if we've got time for
that, but I mean, we can go quick.
I can do that.
Okay.
No, I can do that.
Share the screen if you want to.
What's that Ms.
Berlin?
I have to go to bed eventually.
I get up real early.
Okay.
If you need to drop off, that's fine.
We will continue on.
I know you have to get up early.
Mr. Boettcher is still working on his internet.

(01:54:35):
But if you do need to drop off, you know,
just say when and we'll let you go and
we'll continue on, okay?
All right.
Okay, cool.
Nice to have met you, Amanda.
Yes, you too.
I've seen your name
around, always good things. Yep.
Have you, yeah, I'm
sure, you being at BHIS,
You'll be at some of
their conferences and stuff.

(01:54:55):
So I'm sure I'll-
No, and again, I'm debating whether or
not to make the annual pilgrimage to
South Dakota this year.
We'll see.
Last year was my first time I went.
I love that.
I mean, it's a wonderful thing.
It's a wonderful place.
It absolutely has
local character and charm.

(01:55:17):
And- I want to go to that because my wife
wants to go to Devil's Tower.
The problem is it takes two flights
regardless of whichever way you go.
I have to go through Salt Lake or I have
to go through Denver.
Oh no, it's so remote.
I do not drive.
It's probably
closer to you than it is to me.
No, thank you.

(01:55:37):
Well, not anymore.
Now you're down at the
bottom of the country.
Yeah, well, if I was in Seattle, I would
fly from Seattle to Minneapolis to
Deadwood to Rapid City or I'd have to fly
to Seattle to Denver
or Salt Lake up there.
So either way I'd be traveling in the
opposite direction of
where I needed to go.
But San Diego, we'd fly San
Diego, Denver to Rapid City.

(01:55:59):
But yeah, the wife wants to go to Devil's
Tower, which you go there and
it's still a three hour drive.
No matter which airport you go from, it
seems to be right in the middle of about
three different airports you could go to.
But yeah, I would love to go to Wild
West Hackin' Fest, and then there's one in
Denver similarly, I think.
Mile High hacking.
Mile High Hackin' Fest, yeah.

(01:56:20):
They're trying to have
it more than one location.
We've had Deadwood maxed out.
We can't accommodate more
people than we already do.
The Holiday Inn
Express is down the street.
Probably get another 20
or 30 people in there.
They're all filled.
We filled them all.
Holy God.
Wow, okay.
We filled the entire town.

(01:56:41):
You have to like,
when we've come to town.
Are there campsites?
Camp sites, you could
put a tent up or something.
I know a guy who ran a
camping infosec conference once.
So, you know, got a
little experience in that.
So, you know,
anyway.

(01:57:02):
All right, let me get
something set up here. Okay.
Do our own thing.
So while we're doing this, WaveAJ says,
"Besides AI, what are y'all's thoughts on
emerging technologies like quantum
computing down the line? Also, can I kind of
connect with you guys on LinkedIn?"
Yeah, sure, you can
connect with us on LinkedIn.
Just look us up and then connect.

(01:57:24):
Quantum computing, I mean, right now
we're just trying to get post quantum
encryption keys and stuff in
things like SSH and whatever.
Now I'm just trying to get people's like
exchange servers off the internet.
I don't know, you know, quantum computing
is a far stretch for me.
Yeah, again, that's gonna be one of those

(01:57:45):
ultra rich things, probably
used in a lot of research.
I don't know how we will
use it in security yet.
Nobody does.
I would say the first teams that will use
it will probably use it to try to crack
passwords because that's what they do.
Yeah, they won't use it for pen testing.
Yeah, hacking, yeah,
yeah.

(01:58:06):
You know, would you like to play a game?
No, I do not want to play a game.
The only way to, I do
not want to play that game.
The quantum computing AI?
Yeah, oh,
with the blockchain, add the blockchain.
Make sure you get
that blockchain in there.
Yeah, I

(01:58:29):
don't, I- Sorry, I'm just
getting things set up here.
It's okay, I rarely give myself dead air.
So I'm allowing myself to grow as a
person by having a little dead air
because I am an hour and a half of
nothing but speaking when I'm on my
streams on Tuesdays and Fridays, so.
We could just sit here
and stare at each other.

(01:58:50):
Yeah.
Okay, I'm gonna have an awkward pause
like Craig Ferguson used to do.
Do you, at work we do a seven second, we
have a seven second rule.
If someone can count to seven seconds and
no one says anything, we're done.
Really?
Okay, no questions, we're done.
Nice, yes, allow, there we go.

(01:59:13):
Yay,
I had to ask for-
That's right, I got this mother locked
down because I'm security and stuff.
And stuff.
And stuff.
All right, so what we have here, let's
see, will it follow?
You got a terminal window up there. Okay.

(01:59:33):
Your Star Trek fanfic is
in the other one, I'm sure.
(Laughing)
All good, nope, this is the one I wanted.
So on this I have, I have Ollama already
installed and I already
have several models created.

(01:59:55):
Okay.
So let's see, let me double check and
no, they probably haven't been updated
in the past two weeks or
if they have, it's fine.
So the three here at the bottom,
these are off the shelf provided by
different LLM players.

(02:00:18):
So Phi comes from Microsoft, Mistral comes
from the Mistral organization.
Llama is, I believe, a Meta derivative.
Right.
Playing with those is relatively safe
compared to others that you might
download off of Hugging Face.

(02:00:40):
So then from those, I made a
Daffy Duck and a Quizmaker.
So just to show you
how difficult this is,
let's see.
Wait, you made Daffy Duck and Quizmaker?
Yes.
Wow, okay.
I did.
Nice.
So, and it's really, you know what?

(02:01:02):
I'm gonna have to
change the sharing here.
Hold on a second. Okay.
Because I wanna be able to
show more than one thing at once.
Okay.
And I think I know,
hard as I rearrange my entire desktop.
That's okay.
So yeah, while we're doing this,

(02:01:23):
ollama.com, you can go there and download
the application that Bronwen
is using in the command line.
Now, ollama.com is a little
different in terms of software.
You have to be familiar with the CLI or
the command line interface, how you do
this, because there's no UI.
You have to use commands through that.

(02:01:43):
One thing that I find annoying about
Ollama is I would really like for it to
tell you what it has available because
you have to move the CLI out of the way,
go to ollama.com, search through the
models there, and then go and download it
through the CLI, which I find stupid.
They should be able to go
query what all is available.

(02:02:04):
And then it's like, gives you a list,
like, you know, from a menu.
And then you go, oh, I'll take, you know,
number 97, you know, and it'll download
number 97 for you, but it doesn't do
that, which I find a glaringly obvious
feature to put into the
command line, but that's just me, so.
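(For what it's worth, the CLI does have a couple of related commands, though browsing what exists still happens on the website's model library; the model name below is just an example:)

    ollama list          # shows only the models you have already downloaded locally
    ollama pull llama3.2 # fetches a named model, once you know the name from the site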
That was fairly easy.

(02:02:24):
Look, I work with people that have fucked
up with the code
ticketing systems, so, you know.
Yeah.
Don't even get me started.
Jira?
No, we have that.
No?
No, it's homegrown.

(02:02:44):
Probably shouldn't say anything.
Figure out how to get this.
No, that's not what I want.
Argh, I'm trying to
share two screens at once.
And I think I'm going to just cheat.
Like, no, it won't.
Get SDAI.

(02:03:05):
I have a 49 inch monitor.
And I'm used to being abused.
We're just all
getting our rulers out now.
We're gonna measure our monitors.
All I hear is people complain all the
time when they have these big, I mean,
they look great, but anytime you have to
do like advanced screen

(02:03:27):
sharing, it's not the problem.
So other than
cost, that's why I don't have-- Is it one
of those like, you know, half, you know,
things that come around,
like, you know, command center.
I got two 32 inch Dells right here that
are pretty damn nice, but, you know,
those were not bad, so.

(02:03:47):
All right, let me try this.
Hold on.
Oh, you know what?
Now that you-- Wait, it says hold control
to select multiple windows.
I may have just learned
a new Zoom trick. Yes.
No way.
Okay, so if this works properly.
I wonder how it displays.

(02:04:08):
Okay, so there's one, hold
control, select the other.
I don't use Zoom enough to know, so.
I use it all the time.
I will send you a screenshot.
What do you see now?
No way, it's very small.
I see you and I see
the terminal and I see,

(02:04:29):
what can I help you with, yeah.
Let me see if I can, nope, nope, nope.
But you're a normal size.
I don't know how you see it, Brian.
Oh, oh.
Is that better?
What they're trying to do is do like OBS
like stuff where you've
got the guy in the corner.

(02:04:50):
Yeah, yeah.
I don't know if this is
Zoomed in enough that you can see.
Yeah, I can see it.
Yeah, I can see it.
All right, cool, all right.
So what I wanted to do
was to be able to have
the ability to show stuff in a web

(02:05:10):
browser and also send commands because of
course, what would experimenting with
toys be without having to go to GitHub to
look stuff up, right?
That's right. Okay,
so, Ollama of course is the tool that I'm
using and when you go into the download,
depending on your operating system, you

(02:05:33):
can get access to different executables
except for Linux people, who use a
one-shot command to
pull in a shell script.
One of the people at BSides commented
about the degree of trust to download and
immediately launch a
shell script from a forum.

(02:05:56):
Yeah.
See, no one cares anymore.
I know, I know.
I know, I know.
I talk about this stuff all the time
where that's one of our install things
for people we onboard is just this
PowerShell script that
downloads and runs stuff.
Yep.
And as an admin, I would have never ever
done that and now
people do it all the time.

(02:06:17):
Yep.
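(For reference, the one-shot Linux install command on the Ollama download page is, at the time of writing, along these lines:)

    curl -fsSL https://ollama.com/install.sh | sh   # downloads and immediately runs the install script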
So I'm going to go
ahead and just do a quick,
actually I shouldn't do this.
I already have Ollama installed, if I
wanted to double check and make sure it
was the latest version, I would run this,
but if I run this, it'll wind up making
this longer, so I'm
not gonna do that. Okay.

(02:06:37):
Because it'll take several minutes and I
don't wanna abuse people by making them
sit through a download involuntarily.
So like I said, the command is nice and
then when you go to the GitHub repo, once
you scroll down past all of the files and
get to the readme content, it has a lot

(02:06:57):
of information including that same
command and you can go deeper into the
installation process if you're one of
those people who has to have granular
control over every single thing that you
do to your systems and
no, that is not a dig.
I admire and respect people like you.
No, Gentoo people are wrong and they,

(02:07:19):
they, they're, they're, what's wrong
with, you know, Mary, anyway.
(Mary Laughing)
I'm only, I'm only about 33%, you know,
true on that, but yeah.
Okay, so it's there for you and then you
get down and further on describes a lot
of different popular models and what the
actual download command is and then

(02:07:41):
finally way down here, we get to the
instructions on how to customize a model
and it really is not that difficult.
First, you have to
pull a model to work with.
I have three off the shelf that I can use
and then the instructions for how that

(02:08:03):
model are going to be customized are in
what is called a model file and I
believe, let me take a look here.
Oops, I didn't say I didn't do that.
Help if I could type.
Okay, so let's do,

(02:08:31):
so I have two model files that I've
already created, the Daffy model file and
the QuizMaker model file and both of them
are essentially following this recipe
here and the first line in the file is
FROM llama3.2, so the first line of

(02:08:52):
instruction is the FROM statement.
What model are you going to use as your
foundation upon which to build?
Next is this line
which is an inline comment.
It begins with a hashtag or, for those
who are more interested in esoterica,

(02:09:13):
an octothorpe. Octothorpe.
Yay, octothorpe.
And this says you want
to set the temperature.
Now the temperature
determines the degree of randomness.
It's between zero and one, not Boolean.
It is not Boolean, it is a range.

(02:09:34):
So a temperature of 0.7 is perfectly
legitimate but one is the maximum.
You track me?
Right.
Okay, so if you have the temperature set
to one, you will get more
variety in the responses.
If you set it closer to zero, it will be

(02:09:55):
basically the same prompt will generate
the same response every time.
Okay.
I mean, and I've seen
instances working with different
products that are available online.
Some of them are more creative.
They allow for more

(02:10:16):
randomizing than others.
And I leverage that.
There are times when I
want it to be less creative.
Then everything between this triple
double quote, yes, it's
three double quotes in a row.
Right.
And everything bracketed between
them is the instructions for the system.

(02:10:37):
Now their example is you are Mario from
Super Mario Brothers.
Answer as Mario, the assistant, only.
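Putting those pieces together, a minimal model file follows this shape (a sketch using the Daffy persona from the demo in place of Ollama's Mario example; FROM, PARAMETER, and SYSTEM are the Modelfile instructions just described):

```
# FROM names the foundation model to build upon
FROM llama3.2

# Comments begin with a hashtag (octothorpe). Temperature is a range
# from 0 to 1 inclusive, not a Boolean: closer to 1 means more variety,
# closer to 0 means the same prompt gives nearly the same response.
PARAMETER temperature 0.7

# Everything between the triple double quotes is the system instructions.
SYSTEM """You are Daffy Duck. Answer as Daffy, the assistant, only."""
```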
So let me clear things out here.
Oops, that would help.
And let's do a quick cat.
And I believe I had.

(02:10:58):
So here's the contents of
the model file for Daffy.
I'm not a big Mario fan.
He's okay, but he's not my favorite.
So I went with Daffy.
And then, so once you have your model
file, and I'm following the exact same
formula: the FROM, the temperature (I

(02:11:20):
lowered the temperature a little bit to
demonstrate that it's a range: not just zero
or one, but everything between and including them).
And then the instructions.
And then here is the command
to actually make a new model.
So let's go here.

(02:11:41):
So I already have the model made.
So it's just gonna recreate it.
So, ollama create.
And since I'm calling mine Daffy,
I'm gonna make it Daffy too.
So that way we have a
Daffy and a Daffy too.

(02:12:01):
Fantastic.
So this is a brand new model.
And let's see.
So right, tack F.
Yes, there is a file coming, and it
is the Daffy model file.
And I love tab autocomplete. Enter.

(02:12:23):
Boom.
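The commands being typed look roughly like this (the model file name is illustrative; tack F is the -f flag pointing at the model file):

```
# Create, or recreate, a custom model from a model file
ollama create daffy -f ./daffy.modelfile

# The same recipe under a second name gives you a Daffy and a Daffy Two
ollama create daffy2 -f ./daffy.modelfile

# Then chat with the new model interactively
ollama run daffy
```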
So that looks a lot like a Docker
container kind of output.
One of the things I've always noticed
when I download these things is that the
output is very similar to Docker's, but it's
not running Docker in this case.
You don't necessarily have to have Docker
running on your box for this to work.
So how's it getting away with that?

(02:12:45):
I don't know enough about how it works
under the covers to answer that question.
But I would not be at all surprised
because I know it is also fairly easy to
have Ollama running
inside a Docker container.
So if you or somebody else happens to

(02:13:05):
like working with Docker, this is another
opportunity for you to use both the LLM
and Docker at the same time.
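The resemblance likely isn't accidental: Ollama distributes models as manifests plus content-addressed blobs, much as an OCI registry does, though no Docker daemon is involved. If you do want the container route, Ollama publishes an official image; a minimal CPU-only run looks like this (per the ollama/ollama image docs):

```
# Run Ollama in a container, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and chat with a model inside the container
docker exec -it ollama ollama run llama3.2
```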
All right, so yeah, it was that easy.
Now granted, I've got a Threadripper,
I've got GPUs, I've got a respectable
amount of RAM, but my system is just a
mostly off the shelf gaming system.

(02:13:27):
It's not super, it's not the
whopper, it's not that big.
So now let's run.
Actually, let me clear first.

(02:13:51):
(Audio cuts out)
Yeah, one of the things when I was
looking at models that I could run
on macOS, because it
doesn't have Nvidia, right?
So you're more limited in the models,
because if you're using one that only
supports Nvidia GPUs, it may have a CPU
fallback, which will slow me down.

(02:14:15):
But yeah, it's interesting to see that
it's pretty limited for anything that
doesn't run on Nvidia.
Even AMD was somewhat limited, even
though it supports a lot of
the same instructions.
Anyway, so you can see the
Daffy model is doing stuff

(02:14:36):
in character, which is nice.
I've seen people use a
combination of Ollama, a Raspberry Pi,
and a small screen in a book to craft a
never-ending storybook.
Oh.
So the story just keeps evolving and

(02:14:56):
evolving and evolving
and it's all fed by an LLM.
Huh, well, that's interesting.
I'd have to find that YouTube video.
The maker who did it has done just
amazing work elsewhere.
So when you're in the Ollama interface, if
you have questions, the
slash is how you write a command.

(02:15:17):
The question mark is
the equivalent of help.
And these are the most common.
So: set session variables, show model
information, load a model; bye will exit out of
the interactive mode; or you can begin a
multi-line message using three quotes.
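A sketch of those interactive commands, plus the model listing that comes next (slash commands from Ollama's /? help; the temperature value is illustrative):

```
/set parameter temperature 0.3    # set session variables
/show info                        # show model information
/load daffy2                      # load a different model
/bye                              # exit interactive mode
"""                               # begin a multi-line message (close with another """)

# Back at the shell afterwards, list the models you now have:
ollama list
```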

(02:15:38):
So I'm gonna go ahead
and exit out of this.
And let's do a quick look and see how
many models we have now.
Okay, so yes, Daffy.
Daffy, Daffy too, yeah.
All right, so let's do the

(02:15:58):
same thing with Quizmaker.
So we'll start off, I'll up
arrow till I get to the cat.
Only this time instead of
Daffy, it's the Quizmaker.
This one is a little bit longer because
it has more content.
And this is, I shamelessly admit to

(02:16:21):
stealing content off of Daniel
Miessler's Fabric GitHub repo.
Go there, explore the patterns.
This one was lifted from it and I use it
to create an LLM that will
specifically help write quizzes.
And you can see it in the model file.

(02:16:45):
I left the temperature at one, I wanted
it to be as creative as humanly possible.
The system starts here and it
goes all the way down there.
Wow, okay.
So this is why I made the point that

(02:17:05):
knowing prompt engineering will be
massively helpful to you.
Right, so that's a whole
application written in an LLM.
Exactly.
Right.
Because, and as you work with it, you
will refine it until it meets the needs
of the problem you're trying to solve.

(02:17:25):
And that's where, don't think of
the AIs as a silver bullet.
Think of them as
specialized problem solvers.
And so the goal, generate
questions for a student.
If the input section defines student
level, adapt the questions to that level,

(02:17:47):
specific steps on what to
do, output instructions.
So the output instructions, remember I
said how you want to provide an example.
This is an example of how
output should be provided.
And then finally, the marker saying that

(02:18:09):
what comes next is input.
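In outline, that QuizMaker system prompt follows the Fabric pattern shape: persona and goal first, then steps, then output format, then the input marker. A compressed, illustrative sketch (not the actual file):

```
FROM llama3.2
PARAMETER temperature 1

SYSTEM """
# IDENTITY AND PURPOSE
You generate quiz questions for a student. If the input defines a
student level, adapt the questions to that level.

# STEPS
- Extract the subject and the target audience from the input.
- Write questions suited to that audience, each with an answer.

# OUTPUT INSTRUCTIONS
Subject: <subject>
* Question 1: ...
  Answer 1: ...

# INPUT
"""
```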
So using that, that's...
Is there a specific way
these should be structured?
Like noun, verb, in a sentence structure.
So is there a certain section, like tell
it to assume an identity.

(02:18:30):
We want it to do this, and then the
output should look like this.
Is this an
input/output kind of thing?
Or is there any kind of specific order or
way that it has to go?
You're asking about syntax.
I'm advanced, yo.
I don't know that it
makes any difference.
I think just...

(02:18:54):
Okay, what I know versus what I assume
based on other things that I know.
What I know, I have no idea if the order
does make a difference.
I believe that it will based upon my
knowledge that large language models are

(02:19:14):
probabilistic and that the way their
responses are built begins with the word
that they encounter
and then branches through.
So just as I had to learn the hard way
that the order in which I provided

(02:19:34):
instructions in a webpage matters because
it reads from top to bottom.
So if it's an instruction at the bottom,
it will load last, especially on a large
page back when bandwidth was horrendous.
So I think it's going
to be the same thing.

(02:19:54):
Every set of instructions or every
reference that I've come across that has
said how to build this kind of
structured context always began with the
persona definition first.
I don't know if it
would make a difference.
I would love to see the

(02:20:15):
results of an experiment.
I don't think I'm going to get to doing
that anytime soon, although I'd love to.
Yeah, no, I mean,
logically that would make sense.
You'd want, out of the whole of the
universe, you'd want to say
you're the expert in this.
As you go down, you're
getting more and more specific.
That would make complete sense.

(02:20:35):
It's filtering everything out, right?
So you don't want it to ask about nuclear
physics if all you want to know about is
cosmology or something like that.
Okay, so here's the command to make the
quiz maker model file, and I did the same
thing as I had done before,
and it took just that long.

(02:20:57):
And it's running.
There you go.
All right, from our audience.
Yeah, take-- What kind of
quiz shall we compose today?

(02:21:18):
You have an idea, Amanda? A
subject near and dear to your heart?
Subject near, oh, there's so many.
I can't think of anything good.
I wanted to think of something funny and
not something just
like boring cybersecurity.

(02:21:39):
About incident response log analysis.
Log analysis.
Boy, you really know how
to party there, Amanda.
I was trying to think of something funny.
I'm like, I'm gonna-- It's late.
Come on, give me a break.
Okay, all good.

(02:21:59):
Who is your target audience?
Are these experienced hands?
Are these people-- My mom.
Oh.
There we go.
The target audience is
little old lady from Pasadena.
Non-technical.
Yes.
Yes, non-technical.

(02:22:21):
You could put that too,
little old lady from Pasadena.
Amazing.
I do wonder what it's gonna come up with
because this would be fantastic.

(02:22:46):
Okay.
Question one, what is the primary purpose
of an incident response log and why is it
essential for organizations?
I don't know.
And the answer provides an incident
response log is used to document and
track all incidents, including security
breaches, system crashes, or other events

(02:23:07):
that affect the organization's systems.
It almost reads like an "explain like I'm
five" Wikipedia or something like
that; it doesn't use the big words. So.
That's pretty cool.
Yeah, it's not bad.
Not bad at all.
So the examples that I used were also to

(02:23:28):
give a little bit of dynamic range.
Let's see.
Let's do...
About local solar system astronomy

(02:23:48):
and the target audience.
Whoops.
Is fifth graders.
Okay.

(02:24:10):
So what is the order of the planets?
Hmm.
Yeah, and it left Pluto off.
That's crazy.
(Laughing)
What planet is the red planet?
Mars.
What is a moon?
And which planet in our
solar system has no moons?
Mercury.

(02:24:31):
Well, actually there's more than one.
No, Mercury is the only one.
Mercury is the only one.
Even Pluto has a moon. Venus?
Huh.
It may have natural satellites.

(02:24:51):
To the cloud.
I don't know. That's...
Venus has no moons.
It and Mercury are the only planets in
our solar system
without a natural satellite.
It does have a quasi-satellite that's
officially been named Zoozve.
You just made that up.
No, you did not.

(02:25:12):
Zoozve, okay.
Venus facts. Wow.
That's Venus facts from NASA science.
Z-O-O-Z-V-E.
Orbit's relatively close to Earth but
does not pose a threat to our planet.
It's the first identified
quasi-satellite of a major planet.
Yeah, we also have quasi-satellites,
including a small
asteroid discovered in 2016.

(02:25:34):
It's too small to be a moon, right?
It was a quasi-satellite?
That's no moon.
That's a space station.
Okay, and to show dynamic range, same
setup, only different target audience.
College sophomore.

(02:25:56):
Oh, shoot, yeah, there we go.
Definitely longer.
Page point.
The grade's close, yeah.
This is atmosphere.
Axial tilt driving
seasonal changes, very nice.
Very nice.
Fantastic.
So, and the question that you had,
Amanda, was wonderful.
Yeah, you shouldn't ask me.

(02:26:19):
It's been a long day.
I should have asked an Easter question.
Like, we're recording on Easter.
That's right. Oh, that's right.
I didn't even think about that.
Also, I had to use for everyone.
I could have been a
little more organized.
I could have made an
Easter bunny model file.
Yes, yes.
I'm too tired to do it now. Nice.

(02:26:41):
But yeah, it wouldn't be that tough.
That's pretty awesome.
And the good thing is, you did this with,
I mean, so, okay, so one thing you did
mention, Threadrippers and GPUs,
what are the options for folks that can't
afford a $1,200 GPU and the
technology to do it at home?

(02:27:02):
Is there, obviously, there are options
for folks like
DigitalOcean or something like that?
A lot of stuff that you would do with
Jupyter Notebooks, you
can do with Google Cloud.
Okay.
And as you would with other AI spaces or
hardware systems as a service, you

(02:27:27):
upscale and downscale your needs for
whatever the duration is.
And with Google Cloud, if you run
something, it wakes up
and it gets things spun up.
And then if you don't do anything for 20
minutes, it all kind of goes into a
shutdown mode and would have to be woken

(02:27:47):
up again, which is a power saving feature
for them and good for them.
You can also, I mean, Amazon Web Services
are not hugely expensive in the short
run, just make sure that you put limits
on how much cost you can generate.
Because I have heard about

(02:28:09):
large bills being forgiven.
I don't think that's
something one should count on.
Exactly, yeah.
They get crazy after a
while if you do it too many times.
So, you know. Yeah.
New account, never had one
before, do it once, maybe.

(02:28:30):
But yeah, like I say, not something you
should try and count on.
But yeah, there are lots of service
providers and you can do things without
having a massive graphics card.
Now you saw how quickly the files were created
on this current system.

(02:28:50):
My old system, it
didn't take that much longer.
You could maybe count to
three, which is still not bad.
And then running, it wasn't too heavy.
Pardon?
How about like a five-year-old MacBook?
That's the other one I
have, that's all I have.
You'd probably do fairly well.
Because Macs by their

(02:29:13):
native architecture.
The reason that Macs are wonderful for
creatives is all of the video
processing power, because of
how creative applications use it.
So even if it's a five-year-old MacBook,
you probably have more architectural
support for running Ollama than an

(02:29:35):
equivalent Chromebook or run-of-the-mill
Windows-based PC or laptop would have.
So I'd say go for it.
And similarly, if you have devices
hanging around that
you don't have a use for.
I think I've got like, I've acquired or

(02:29:58):
inherited three or four
different laptops of different ages.
And those are glorious machines for
loading bare metal Linux.
Right, right.
And then you can really pare it down and
just do what you need to do.
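And when the hardware really is modest, the other lever is simply pulling a smaller model; for example, tags like these from the Ollama library (sizes approximate, pick what fits your RAM):

```
# Smaller models run tolerably even on CPU-only or older machines
ollama run llama3.2:1b   # a roughly 1B-parameter Llama
ollama run phi3:mini     # a small Phi-3 variant
```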
That's awesome.

(02:30:20):
Well, that was great.
It's super easy.
If you're listening to this on the audio
podcast, I encourage you to come to
YouTube and watch the demonstration.
And it's nice that we have the
opportunity to do this.
And I think it's something I would hope
everybody can do if you can install
Linux, you could probably install this.

(02:30:42):
And if you can't, there are
still ways to do this.
I mean, you can run this on Windows as
well; Windows Subsystem
for Linux is available.
But yeah, I think we need to learn it.
And that's why I'm kind of learning it.
I think we failed the industry when
everybody was so reticent
about moving to the cloud.

(02:31:03):
And then IoT, it was like, oh,
everything's bad and whatever.
We need to learn how to use this stuff
and get ahead of it.
Or it's going to, we're going to look
like the Luddites again for a few years
before people start learning.
There's still no earthly reason why a
washer and dryer needs to be able to talk
to anyone over the internet.
I still don't see the need to query my

(02:31:26):
refrigerator from work about whether or
not I have enough milk at home.
It's none of my
refrigerator's fricking business.
But it wants to be able to remind you
when you are low on milk.
You know what?
I got a new washer dryer combo.
It's like the all in one unit.
And I didn't even know this was a feature

(02:31:49):
because everything's smart now, right?
So it's a smart one.
I was going to play music in my kitchen
the other day and realize that I could
stream Bluetooth audio to my washer.
So what's your favorite feature?
Like why is that a feature?
I did it because I wanted to see how

(02:32:09):
great the speaker was.
But I- How good is the speaker?
It's terrible.
Oh, well it's not supposed to do
polyphonic tones and stuff.
I'm going to give it some really bad
reviews just because the speaker quality
is just terrible in
my- You take the front
panel off and then you can solder on a
nicer speaker and you
know, or a Bluetooth.

(02:32:29):
Yeah, yeah.
I know.
You know,
I never thought I'd use, you know,
the monitors I've got sitting
over there that tell me things, but every
once in a while I'll be like, you know,
set a timer for 15 minutes or, you know,
add this meeting to my calendar because
sometimes it'll be like,
oh, I've thought about it.

(02:32:50):
And then, oh, let me go find my phone or,
you know, my ADHD is flaring.
And it's like, oh,
I'll go find my calendar.
And then I forget.
So if I could say it
immediately, then it's already there.
So there are some
things to embrace there.
I think the biggest problem with IoT is
kind of like the same
problem we're having with AI.
Where's the data going?
What are they doing with it?

(02:33:11):
How are they going to use
it against us in that case?
So I remember years ago, years, years
ago, reading a science fiction novel and
the way that one of the bad guys in the
novel was able to keep tabs on what other
people were doing was by monitoring their

(02:33:34):
physiological readings.
They had a 24/7 digital readout going
to a main medical facility and, yeah, they were
able to just notice the differences in
respiration and
heartbeat and all of that.
And basically know exactly what that
person was doing. Wow.

(02:33:56):
And since they had more than one, they
were pretty much able to tell with whom.
Yeah.
Or without whom.
Right.
That's nuts.
Yeah.
Well, I, you know,
I... That's power.
What's that?
People don't think about the consequences
of having access to

(02:34:18):
that kind of information.
Yeah.
And a careful note.
For me, I can smell dinner wafting
through the hallways.
So to wrap up, it's time.
Yep.
We've been doing this two hours almost.
Yeah, almost.
But thank you for the

(02:34:38):
invitation, this has been fun.
Yeah, it's fun.
Yeah, Bronwen, can you tell people where
they can find you, how they would get
ahold of you if they wanted to?
LinkedIn is really the best place.
I do also have my own website.
They can send me messages there.
I haven't got a regular
stream or anything as yet.

(02:34:59):
Haven't quite gotten organized enough for
that, but it's on the to-do list.
Awesome.
When I get to that stage, I will
definitely send out press releases.
Fantastic.
That's awesome.
Ms.
Berlin, how would people find you if they
wanted to talk about incident response,
threat intelligence, you know,
usability and security?

(02:35:20):
Same thing, LinkedIn.
And then I will be at RSA next week.
Oh,
all right.
Yeah, you got your calendar open to make,
you know, have-- No prisoners.
Yeah, they're not worth the trouble.
Exactly.
I would probably just be
standing at our booth nonstop.

(02:35:40):
So go up and find me.
Okay, yeah, get a hedgehog, you know, so.
I'm so excited because our whole
marketing thing I helped with and it's
all around like sleep.
And we have t-shirts that
just say, I need a nap.
I need a nap. Oh.
Oh.

(02:36:00):
I'm so excited.
That's fantastic.
That's fantastic.
Yeah, so yeah, go check Ms.
Berlin out at RSA, go check out the table
and get a, you know, get a shirt.
And yeah, hope
everyone has fun doing that.
You can find me on LinkedIn as well.
The links are in the show notes.
I'm also at brianbreak.com on Bluesky

(02:36:23):
and I'm on Reddit a little bit.
I'm actually taking a little break, but
I'm on Reddit normally in various blue
team, red team, you know,
cybersecurity subreddits, so.
We'll be back on our stream on Tuesday.
I'll be back.
We'll be doing some more Python, probably
some object-oriented programming as part
of the backend developer bit.

(02:36:44):
So we'll be learning as we go.
And Mr.
Boettcher, you can find
him on LinkedIn as well.
And he was still
having some internet issues.
So we'll catch him again soon.
I'm Bryan Brake, and
thank you, Bronwen, for coming.
Appreciate your time.
And Ms.
Berlin, it's always great to
see you and have fun at RSA.
Yeah, thank you.

(02:37:05):
Awesome.
Well, that was it for
Brakeing Down Security this week.
Y'all have a great week.
Take care of yourselves.
As we're fond of saying around here,
you're the only you you have.
So you need to take care of yourself.
And we'll talk to you soon.
All right, bye everybody.
(Upbeat Music)