Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Mark Smith (00:01):
Welcome to the Copilot Show, where I interview Microsoft staff innovating with AI. I hope you will find this podcast educational and that it inspires you to do more with this great technology. Now let's get on with the show.
In this episode, we'll be focusing on AI security, trust,
(00:21):
being responsible with AI: trustworthy AI.
My guest is from Seattle in the US. He works at Microsoft as a senior offensive security engineer, so AI red team.
I love that.
He's a public speaker, blogger, hobbyist game developer. He loves software and game development, of course,
(00:43):
technology in general, music, and he's also a fan of the Oxford comma. I like it too. I always use it. You can find links to his bio and socials in the show notes for this episode.
Welcome to the show, Joris.
Joris de Gruyter:
Glad to be here, mate.

Mark Smith:
It's been a while since you've been on. It was episode 485, and way back
(01:06):
in 2023 was the last time you were on the show. You were talking about Power Fx, a totally different category than the one you're in now. Was it pre or post ChatGPT?

Joris de Gruyter:
Yeah, it was pre, pre-ChatGPT, because it was September that year, so it's been a while. The world changed.
Yeah, I'm excited to have you on. And, by the way, if you're
(01:27):
listening and you want to find out about food, family and fun, I'm not going to ask Joris again; you can go see that in that episode back there, and his journey into tech. I'm excited about your current role, AI red team. When I saw that you were moving in there, I pretty much asked you straight away, let's have a chat, and you were like, let me get my feet under the desk and see what I've got myself into.
(01:49):
What did you get yourself into?
All sorts of exciting things.
Joris de Gruyter (01:54):
So at Microsoft, any feature that involves AI has to be heavily documented and submitted to a central board, which is a huge thing. For the size of the company and the breadth of the portfolio of products that we have, this is huge.
Everything has to be centrally approved, and so as part of that,
(02:15):
I mean, there's security reviews, privacy reviews, all sorts of things that have to happen, and then it gets to a central board here in Seattle. Based on what they see, they can decide that, hey, maybe this looks like a high-risk usage, or they feel like maybe the team doesn't quite understand the potential issues, or maybe it's just something super interesting.
(02:38):
They basically kick that to us, to the AI red team. So we get involved. We test the product adversarially, but also as a regular user. Sometimes, as you've probably seen in the news as well, a regular user asks a very benign question and gets some really interesting unintended responses. So we test for all of these things, and I guess it kind of
(03:00):
depends on what the product is or does exactly, but this can go anywhere from testing a model itself, like Microsoft's small language models, for example the Phi series. We usually get our hands on those before they go out, so we test more at a broad scale: how susceptible is it to jailbreaks, things like that?
(03:21):
Or we dive straight into products and see, hey, can we get it to generate bad images, or can we get it to exfiltrate data from your Power Platform environment, or whatever it may be. We usually have just a couple of weeks to do that. So, depending on what exactly is happening, this could be anywhere from two weeks to four weeks for us, and it's interesting
(03:43):
not just because you get to see tip-of-the-spear type of things; you constantly have to adapt, learn new things. A lot of times we get to see products that we've never used before, so you sort of have to dive in and figure it out. So it's been a super, super interesting thing.
So it's a mix of traditional security things. Sometimes it's a mix of responsible AI, like, okay, is
(04:06):
this product not going to be too biased against or for something, or is there potential for harmful content? In what way is this being used? Is it, you know, just generating images or children's stories? There are a lot of these that are immediate red flags when you just hear that.
But yeah, so it's very broad, but super, super interesting.

Mark Smith (04:29):
A lot of people listening won't know what red teaming is, or the concept of a red team as opposed to, in the industry, a blue team. Can you give us a bit of the origin of this? It's not a new concept. Where's it from? It didn't come from Neo choosing the red or blue pill in The Matrix. That's not where it originated. Can you give us a bit of a backstory and explain?
Joris de Gruyter (04:49):
It also doesn't come from Halo's red and blue teams. No, I think this originated in the military, I believe during the Cold War or something, where basically the military would do these exercises where they would pretend to be a communist invasion force of some sort and try to break into things or attack things and do these simulations.
(05:11):
So this has carried over into software security, where typically red teams are teams within, or sometimes outside, a company that essentially try to break your own product, try to break in and see how far they can get. And then the blue team is typically a defensive team that looks into what's going on with the product: do we need to shore up the fences, do we need to look at
(05:32):
something?
And then, technically, I guess there's something called a purple team as well, where the red team and blue team work together in the same room: the red team's like, oh, I think I can get in, and the blue team's like, oh, let me come check what's going on. In a way, that's more kind of what we do on the AI red team, I guess: we collaborate, we get on a phone call, try to understand the product, and then we go into
(05:54):
our thing, and we have constant communication, at least on a weekly basis. We're like, hey, here are the things we've been seeing, you guys should look at this. So the term red teaming is really a very security-focused thing, but it came from the military originally.
Mark Smith (06:09):
Another term I often see in conjunction with red teaming is chaos engineering. Is that something that you touch on as well, and can you explain what your definition of chaos engineering is?
Joris de Gruyter (06:20):
Well, I'll fully admit that I'm still learning a lot about the very traditional security stuff as well. But yeah, chaos engineering is very much on the security-focused side, and, like I said, what we do is very broad. Our team is very diverse, in the sense that we have folks on the team who are PhDs in machine learning, and we have traditional security experts who know all about that sort of
(06:43):
thing, like chaos engineering. We have folks like myself who have a product development background. So it's very, very broad. We have folks that are involved with the military. We don't directly do advisory, but we have some folks directly on our team that attend a lot of military and policy type conferences, so we at
(07:04):
Microsoft are involved at that level. So it's super, super broad.
So, yeah, chaos engineering, but then also policy; it's all over the place.

Mark Smith (07:08):
Just as we hit Christmas, I saw a whole bunch of stuff, not from Microsoft, come out around jailbreaking LLMs, and I saw some stuff where, like, if you construct your sentence in camel casing,
(07:30):
and that was the one that stood out to me, because I'd seen the term, you know, in my developer background. And obviously in a call like this we can't talk about the actual techniques and stuff, because we're trying to prevent them being used for harm, right?
Joris de Gruyter (07:49):
I mean, a lot of it, there's plenty to be found a quick Google search away. Yeah, exactly, exactly.
Mark Smith (07:57):
But is it a case of you just following patterns and procedures to do this testing, or do you find yourself in a situation where you're like, I'm going to try this? Is it always that you're learning new ways to break things?
Because it's interesting. I just went through a process yesterday of implementing passkeys in my M365 tenant, right,
(08:20):
for my organization, and I watched this video, and this person was like, well, haven't we just got MFA? And before MFA, we had SMS text messages. And he was like, because hackers are always learning new techniques, we always have to learn or create new security measures on the fly.
(08:40):
Is the speed of things happening such that you're coming up with, hey, based on these things happening last week, I'm going to try this way, because you've had a new mindset change or something like that? And of course, with the diverse team that you have, you're a melting pot: one idea is going to spark another idea. How fluid is it, and how structured is it?
Joris de Gruyter (09:02):
I feel like it sort of comes in waves a little bit, right, especially at the company. So, you know, we do sometimes get involved with OpenAI as well when they release something new. Obviously we're a big partner, so we usually get our hands on that as well, and that's usually sort of a spark, right. Let's say, when the o1 stuff came out with the chain
(09:22):
of thought reasoning, for us it was like, okay, we need to come up with something new: how are we going to play with this, how are we going to test this? And then that happens. And then there's usually a bit of a lag, and then you start seeing products at Microsoft adopting it or figuring out ways to use it.
something new is coming out,because that's sort of different
(09:44):
things, right.
So one is maybe somebody comesup with a new way of using the
existing LLMs in a novel way,like some product innovation,
like oh, we're going to do this,and then obviously we'll have
to figure out how can we likethere's an assumption that we
can always jailbreak a model,but what does that mean in the
sense of how it's used in theproduct?
Maybe it's not even useful atall that we can show.
(10:06):
There's some novelty there based on the product, but then there's also novelty based on just the new AI stuff that's coming out. So, you know, in the fall we were looking into a lot of the audio stuff that's coming out, sort of the advanced voice stuff. I mean, that's completely new. We need to figure out: can we still jailbreak it? Do we just say the jailbreak out loud, or how does
(10:29):
that work? But then you come up with new things, and same with text, actually: how does it react to different languages or different accents? And that could be a thing with harmful content. But it could also be, to your point, like in text you're doing capitalization, or you're using emojis to maybe get past certain filters. What does that mean when we're doing audio?
(10:50):
And then you've got the same with images, right. And now we're starting to see video chat, right, we are basically talking and it can see you, and so there's novelty in all these things. And in fact, recently on the team we've sort of separated out, because we've been hiring a lot of folks, we've been growing just to keep up. As part of that, we now have a separate team, because, like I
(11:10):
said, we usually only have a few weeks to work on these things, because, of course, they're on a deadline, they want to release it. So we now have a team that's dedicated to doing longer-term things, where maybe we look at a product and we say, hey, this is something novel and we feel pretty good about what we've tested, and then the board can decide to approve it or not. But maybe there's something where we want to go further, we want to
(11:31):
play more with this, just to learn more or to see more.
And so we now have, well, not separate, I mean, we're still sort of together, but a group of folks that are taking this and doing more longer-term research, and actually also trying to stay ahead of it. Video, right now, we're struggling with. You know, with text you can generate a lot of text, and we have our open-source tool called PyRIT that
(11:54):
we can use to send thousands and thousands of messages really quickly to an endpoint to see all the stuff that we get back. How does that go with video? Are we uploading gigabytes? Can we generate videos? Can we use something like Sora to generate bad videos to send? I mean, there's all sorts of things.
So for sure, it's a constantly moving thing, and it's a good mix
(12:16):
of tried and true stuff, but also being creative on the spot based on what we're doing.

Mark Smith (12:24):
You can deny or confirm this, up to you. It's an edgy question. Are you playing with o3 yet?
Joris de Gruyter (12:30):
I would not be able to tell you if I was, for sure. But yeah, we do get these, usually. I mean, obviously it's all very tight-lipped. There's tons and tons of NDAs.
Mark Smith (12:41):
In December they had said that they were doing all their security work; it wouldn't be released straight away because it was going through that, and they were even inviting you, if you're a security researcher.
Joris de Gruyter (12:51):
I'll say, you know, I obviously follow that news a lot, and I'll ping my manager to be like, hey, I saw this thing. And even if they know, they'll be like, well, if we do, I'll tell you when we do, you know, kind of thing. So I'll get it, like, last minute. I think when we got o1, that was literally like three or four days before the
(13:11):
announcement. Wow, that's not long. I mean, I think there is some effort to get them to give it to us a bit earlier, because that's work-over-the-weekend type of stress, which we'd rather not do too often.
Mark Smith (13:25):
How much is your view of the world changing? Because this is on the bleeding edge of so much change, like we've never seen before. And you were saying, you know, with the image prompting and stuff, you could get into some grimy places, right, with what you're doing. And, you know, I've talked with police photographers that
(13:48):
photograph crime scenes and stuff, and they said, listen, this is not something you do a career on. You've got a few years, and then your mind is just like, I'm done, I don't want to see any more of humanity this way. And so how much of, I know, philosophy and the way you look at the world, and even the separation between sci-fi slash
(14:12):
modern-day marketing slash reality, how does that all gel with you? And I'm not saying how does that gel with Microsoft; I'm asking from your perspective, and what you can say, how do things sit?
Joris de Gruyter (14:25):
So I'll touch on the mental health aspect. I mean, that was something that came up even in my interview with the team before I joined. We do deal with this stuff, and it's something to be very much aware of, so there's a high focus on that. I mean, it's part of the job. I wouldn't say that it's something we deal with all the time, but we do deal with it on a regular basis.
(14:45):
The good thing is we have great support, at all levels of management, and we have a lot of extra resources we can tap into if we need to for mental health support. We also basically try to make sure we have a very open culture on the team. So it's very clear that everybody has their breaking
(15:07):
point, or there are things that they just can't deal with, and we're very open with that. We're like, if I have to deal with this or with that, I'm just not going to do it. You have the right of refusal, basically.
Mark Smith (15:17):
Yeah, yeah.
Joris de Gruyter (15:18):
Which is great. Obviously, that goes in multiple directions. Certain content is straight-up illegal, so we deal with a lot of the attorneys as well. If we do have to go down that direction, we have to make sure; there are certain things we literally cannot test, not just because we wouldn't want to see them, but also because they're downright illegal. And then there are other things: you know, we're in the US,
(15:40):
there are export controls, all sorts of things. So anything to do with, like, chemical weapons or whatever, we sometimes have to test for those things. We have folks that are specialized in that, but that has to be handled with extra security; those things cannot be shared with our remote employees. So there are a lot of different levels. But mental health is something we're highly focused on, and it's
(16:01):
great to see that we're trying to be very open with each other, good culture. So far that's been great.
I forgot what the second part of your question was. Philosophy, view of the world. Oh yes, sci-fi, marketing, reality. Our CVP reminded me the other day that we have to be mindful of our skepticism, because basically we deal with
(16:23):
nothing but failure modes, right. So, although I'm still optimistic, I love the prospect of AI, what it can do, I see it do nothing but fail, basically, on a daily basis, right? So I do have to sometimes take a step back and be like, okay, it's not all bad. I do feel, obviously, there's a lot of marketing, a lot of hype
(16:46):
going on. I think there's tons of useful stuff out there as well.
I personally think I'm still waiting for a killer app, to be honest. I think there's a lot of cool stuff, a lot of useful stuff, but I feel like there's more there. Even with the technology that exists today, I feel like there could be more interesting things being done that I haven't seen yet.
(17:06):
I think everybody's sort of going for the quick wins. Everybody's trying to stay ahead of the game. So things like, you know, summarizing and generating text and all these things, sure; search, yeah, all cool. But I'm still waiting to see something that's like, oh my God, this is fantastic.
Mark Smith (17:26):
For me, it's my digital twin. That's what I'm after. It thinks like me, it acts and behaves like me, and therefore it can take a bunch of stuff off my plate, but it'll act as though I am absolutely doing it.
Joris de Gruyter (17:38):
Yeah, and we'll see. I mean, this new sort of agent stuff that's coming out, some of it is just repackaging of what we had last year.
Mark Smith (17:48):
Repackaging of if-then statements.
Joris de Gruyter (17:55):
Yeah, I mean, there's always a level of marketing at every company, and, you know, we're not immune to that either. But there's definitely actual new stuff being built in this agent space, and I'm excited to see what people are going to do with that. I think it does have a positive impact on accuracy and things like that. I've actually seen, so far, a positive impact on security as well, but I've also seen a negative impact on security,
(18:16):
where people go overboard without thinking. And then, you know, some of these agent systems are basically multiple LLMs in the background talking to each other. Can we convince one LLM to poison all the other ones? Stuff like that we're looking into. So it's good and bad, but to me it's an interesting concept. I feel a lot of it is still very much
(18:38):
research, and I'm just waiting to see the killer app.
I mean, don't get me wrong, a lot of this is cool, right, and a lot of people are saying, well, the goalposts are being moved, and in a way, that's true, right. If you'd been shown ChatGPT just five or 10 years ago, you'd be like, wow, that's crazy, that's amazing.
(18:59):
But then the question today is, yeah, we're kind of used to it, and now we're seeing it's not always accurate. It's still amazing technology, right, but the productivity part, I think everybody's still kind of figuring out for themselves: where can I use it to make myself more productive? But the killer app is what I'm looking forward to seeing.
Mark Smith (19:18):
Yeah, I read earlier this week that when the US first put man on the moon, that rocket mission, the trajectory of getting there, was all failure. They failed all the way to the moon. Right, and that's skepticism, right:
(19:39):
it requires the failure to stay on track. It requires the rocket to go off track, to go, hey, you're off track, I need to bring you back on track. It's about going, hey, we're failing, we're failing, we're failing, rather than, hey, actually we're making these incremental gains and we're moving forward. And I see the same thing with marketing. I can go, oh, that's marketing. But I'm like, yeah, but it's sparking ideas. It's sparking ideas of the future.
(20:00):
And, you know, organizations move so slowly that I can understand the need to put the marketing down, because in a lot of organizations it'll be three years before they do something about it. But you have to get those tracks laid so that it starts to become part of the conversation in these organizations. Even though, you know, when I think of agentic
(20:20):
AI, it's a copy of me, probably five or so years away from that happening.
You know, that's what I want right now. I'm very much wanting the future now, but not everybody does.

Joris de Gruyter (20:26):
Yeah, and I think, like I said, a lot of it is research, and there's tons of cool research. We circulate a lot of papers amongst our team, where people find things online and we discuss them. A lot of cool things coming up. But, to your point, by the time it gets to a product team, and
(20:48):
then by the time the product team actually understands what the research is telling them, or how to even apply it, it all takes time, right. So I know sometimes it feels that we're in this mode right now where we're just throwing spaghetti at the wall, and to a degree there's some of that, for sure.
But I think what you're saying is correct. It's also just product engineers trying to figure
(21:12):
out what the limits are of what it can do, what the limits are of when it's useful, right. Because there are some things like, oh, this seems cool, it'll save me time, and people are like, it doesn't save me any time, it's just annoying. And, yeah, it's a whole new world also in user research and user experience research, as far as how do we make it useful, how does it not get annoying. Remember Clippy, right? I think
(21:32):
that's the perfect example, where you don't want that thing to keep popping up saying, hey, you need some help with that? In some contexts it's good, but in some it gets annoying quickly. And so figuring those things out, I think, at all levels of product development, needs to happen. Not just engineering, but user research, to your point, marketing. Yeah, but the competition's fierce, so
(21:54):
everybody's, yeah, full steam ahead.
Mark Smith (21:57):
What do you know now
that you didn't know two years
ago?
Joris de Gruyter (22:01):
Wow, that's a loaded question. Of course, only what I can share. Well, I mean, I joined this team in April of 2024, right. So I feel like I've learned a ton at a personal level, just the skills, but also just about AI and LLMs. And what do I know now? I don't know.
(22:22):
Honestly, I don't know how to answer that question. I think, from a skills perspective, lots. I mean, from simple things like having to deal with Python, which I wasn't really dealing with much before, all the way down to machine learning, things that I had no idea about. Yeah, it's a lot.
Mark Smith (22:40):
It's cool. Hopes and dreams for 2025?
Joris de Gruyter (22:43):
Hopes and dreams. I mean, to be fair, when I made this jump to the AI red team, it was a bit of a gamble, right. I mean, I've been in biz apps for pretty much my whole career, which is, dare I say it, more than 20 years at this point. So you're basically saying, you know what, I'm going to do something completely different. Part of it, of course, was, okay, there's tons of
(23:05):
investment right now, but still, you just don't know, right. It's a very different thing than doing product development, but so far it's been absolutely fantastic. I love, to the previous point, how much I've been having to learn and step up and do all those things, and just getting to see things. I mean, I love technology, I know you do too, so it's just
(23:26):
cool to see all the things that people are working on, and I like the diversity of the work, the diversity of the team. Like you said, we sometimes just have to put our heads together, and then somebody comes out of left field with something you've never heard of before. You know, we have ex-military folks that sometimes say things and we're like, wait, what? And then, yeah, we should try that, you know. So that's super cool. So the hopes and dreams, really,
(23:48):
at this point, I'm still new to this team, but I hope the trajectory continues of what it's been so far. For sure, that's my big thing: just keep learning.
Mark Smith (23:59):
Now I'm going to ask for some advice. One of the things that I talked to you about just before we went on air is FOBU, which I posted about on LinkedIn today: the fear of being obsolete. And of course, the concept behind it here is
(24:19):
that people are worried about how to upskill so that they don't become obsolete in whatever career they're in. We're not just talking about tech, we're talking about anything. If folks are wanting to develop their skills, and you've said you've been drinking from the fire hose this past year as your skills have developed, if you're in this space, or, you know, let's say, take me: technology is something that's
(24:40):
really, you know, core in my life. What go-do things do you think people should consider as part of their training schedule over the next, and I'm only asking about the next three to six months, because I just feel everything's evolving so quickly. But if, right now, let's say here I am, a buddy, and I say to you, hey, what do you recommend? Go
(25:02):
study this, go study that, go check that out. What would your advice to me be?
Joris de Gruyter (25:07):
That's a good question. I think of myself a bit as a generalist. I love to know a lot about different things. I'm typically not a deep expert in any of them, but I can hold a conversation, kind of thing. But I'm also a hands-on guy, so especially with this, it's something I would encourage for anyone: just play with it, try it out, even if it doesn't seem
(25:30):
immediately useful. I know my wife, she's not as technical as I am at all. She tries to avoid technology if she can, in the sense that she doesn't want to deal with it or learn it if it's too difficult. But she just got into it herself. Obviously, English is our second language, so for her it's been super helpful for writing things in her job.
(25:52):
So once she discovered that, she just doesn't want to give it up anymore. But it's one of those things where I had to show her a few times, like, hey, here's what you could do. Or she'd come to me, like, hey, can you help me with this? Let's just pull up Copilot and try it out, see if it helps. So that's one of the things: just playing with it and trying it out. And, of course, you may not even have the access to do
(26:16):
it, but there's tons of free stuff out there as well. So I feel like hands-on is a good thing. You start feeling out what can I do, what can't I do. It's like Google search, right. When we first started doing searches, it was purely keyword-based, and then, you know, we now call it Google-fu, somebody who's very good at doing searches. It's the same kind of thing: you sort of have to figure out how to use it, and I feel it's
(26:38):
the same with this, getting hands-on.
And then, for the rest, for me, I love reading. So maybe I'm the only user out there that still uses RSS feeds, but I don't like algorithmic curation too much. So I really do not like timelines on Facebook or whatever it is, where stuff gets pushed to you based on what you saw yesterday. I don't want to see
(26:59):
the same thing, I want to see something new. So I love RSS feeds, where I subscribe to technical websites. Ars Technica is one of my favorites, but there's a ton of them, and I basically scroll through my RSS feed like my dad used to read the paper. It's just scanning the headlines, like, oh, this looks interesting, I'm going to read that.
(27:22):
I do a ton of that, and I like to curate for myself, not just have something fed to me through a YouTube algorithm or whatever, because I want to discover things that I don't know yet. And sometimes you don't know what you don't know, right? And so finding some general websites or podcasts or whatever to follow will spark ideas, or things like, I didn't know this existed, or I've never heard of this, let me look that up. That's how you learn, right. And then trying it hands-on, to me,
(27:44):
is super useful.

Mark Smith:
What's your feed reader of choice?

Joris de Gruyter:
I use Feedly, which I think is now owned by Google.
Mark Smith (27:51):
Yep, yep, yep. And it came from one of the other best products; I remember when it transitioned through. Interesting you say that; it's made me go, oh my gosh, because I've just decided, and what is it, January the ninth for me, the eighth for you, I suppose, I've decided to leave all social media this year, after being heavily in it last year, like, right.
(28:11):
One of the challenges I set myself was to do 100 TikTok videos in 100 days of posting on TikTok. So I was very heavily in it; I was pretty much posting seven days a week on LinkedIn, blah, blah, blah. I've decided, and I've removed all the apps; they're gone. People will say, oh, you should look at this Instagram reel, and I say, I don't have the app anymore, come and show me on your phone, right. And because I'm sick of
(28:40):
the algorithmic stuff, right, the minute you said that, it was like a light bulb for me. I'm like, that's it, that's the right definition of it. And I used to love RSS, like, maybe four years ago. Feedly, I had it. I'd go in and, as you say, scan the headlines. And you've just shown me, of course, how to bypass the algorithm. I'll hit you up for your OPML file.
Joris de Gruyter (28:59):
Yeah, I mean, I went through mine actually, and I got rid of a bunch of Dynamics stuff. I'm like, okay, not that I want to lose all links there, I love that community, but I skimmed through and I got rid of a bunch of stuff, and I added a bunch of security things and AI things. And again, you just read things, and you're like, I have no idea what they're
(29:20):
talking about, but you just immerse yourself, right. It's an immersion, in a way, and you can pick and choose things that an algorithm would never suggest. You read that article? No, because, by definition, you don't know what it is, and the algorithm knows that you've never looked at it before. So that is a big thing, yeah.
Mark Smith (29:34):
I love it. This has been a massively awesome conversation. Thank you so much for taking the time. I look forward to the next one.
Joris de Gruyter (29:43):
Yeah, absolutely. Thanks for having me.

Mark Smith (29:44):
Hey, thanks for listening. I'm your host, Mark Smith, otherwise known as the NZ365 Guy. Is there a guest you would like to see on the show from Microsoft? Please message me on LinkedIn and I'll see what I can do. Final question for you: how will you create with Copilot today? Ka kite.