Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:13):
Welcome to the
Digital Forensics Now podcast.
The year 2024, and my name is Alexis Brignoni, aka Briggs, and I'm accompanied, as always, by the examiner from the frost up north, that tundra up there, the complainer in chief, the one,
(00:38):
the one that will wear her red shoes on Tuesday, that knows that there's no better place than home, the one and only, or "no place like home" I should say, the one and only, Heather Charpentier.
The music is Fired Up by Shane Ivers and can be found at
(00:58):
silvermansound.com.
Let me tell you, you got lucky that I forgot to take the overlay off so people can see your face.
So I totally forgot. That's fine, that's fine.
And the abrupt stop in the music. Boom. Done.
Speaker 2 (01:13):
That's all right.
That's all right.
We don't want any more musicright now.
Speaker 1 (01:18):
Hey Heather, what's
going on?
Speaker 2 (01:20):
Oh nothing.
Thank you for the great introduction, as usual.
I don't think anybody probably knows what the heck you were referencing, but that's okay.
Speaker 1 (01:29):
Yeah, because you're going to tell us right now. Oh, all right.
Speaker 2 (01:33):
So, yeah, it's cold.
It's cold in New York and I'm being picked on because I have to go out in the morning in the frost, and my friend here is in Florida, where there is no frost.
Speaker 1 (01:47):
So hey, look, I mean, I mean, we have, we have... we did a lot of crazy stuff here, but at least the weather tends to be more kind, at least, well, at least sometimes, not always.
Oh yeah, I think you might have a story or two on that.
Um, no, but tell us more, explain more.
So okay, so it's cold up there, everything's frosty, and what's going on?
Speaker 2 (02:02):
30 degrees this morning. I was freezing to death. And then the no-place-like-home comment, with my red slippers.
Yes, next week I'm on vacation.
I'm so excited to go on vacation.
I am taking my sister for her40th birthday.
She's getting old.
Speaker 1 (02:23):
Wait, wait, wait.
I take objection, your Honor, about the whole being 40 and old.
Speaker 2 (02:28):
Comments! I'm older than her, but she's joining me in the 40 club now.
How's that?
Speaker 1 (02:34):
Well, I mean you
could be 40 and old, or you
could be like me and be 40 and not old.
I'm just saying.
Speaker 2 (02:39):
Oh, if you say so, if
you say so, but we're going.
So my sister is a complete animal lover, and we're going for her 40th birthday to a wildlife park in Kansas, which is where the no-place-like-home comes from, and it is actually, like, an interactive wildlife park where you can interact with the
(03:00):
animals.
We're going to swim with penguins, we're going to meet and greet lemurs, and anything else you can think of. Giraffes, my favorite.
So we're doing that next weekand I'm super excited.
Speaker 1 (03:11):
You're like New York
Snow White.
I think I told you that already.
Did I tell you that already?
Speaker 2 (03:15):
You did, you did. The bird cam, the bird cam.
Speaker 1 (03:19):
There we go.
But you're also into the horse thing, the horses thing that you did last year, and now it's the penguins and lemurs and stuff.
Speaker 2 (03:26):
I love animals.
Speaker 1 (03:28):
You're like Snow
White, like legit.
Speaker 2 (03:30):
I like animals more
than I like people.
Speaker 1 (03:34):
I think that's
something that we can all agree
on.
Speaker 2 (03:36):
It's easy, and my
sister loves animals even more
than I do, so it should be a good time next week.
I'm very excited.
Speaker 1 (03:44):
Oh, yeah, yeah, I'm
excited too.
Speaker 2 (03:46):
Oh, yeah, yeah.
Speaker 1 (03:47):
Oh, I saw it.
I saw it.
Thanks, Kevin, for the insight now.
So, yeah, read it, read it, read it, for the people that are listening: "Ask her about goat yoga."
Speaker 2 (04:01):
One of my lovely
co-workers has chimed in.
Yeah, I mean, goat yoga is awesome.
Speaker 1 (04:06):
If you haven't had a chance to do goat yoga, you have to go into the studio.
And look, I'm going to assume "goat" is the greatest of all time.
You're like, the greatest yoga I've ever done? No, no.
Speaker 2 (04:14):
You go do yoga and
you don't really do any of the
yoga because you're too busy playing with the cute little
goats that jump up on your backas you do the yoga poses.
Speaker 1 (04:21):
It's so much fun. Oh, wow, it's like, uh, those massages where they walk on you, I guess.
Speaker 2 (04:26):
Yes, and they're so
cute, but then they try and eat
your hair and your clothes, and they poop on you, so it gets a little messy.
Speaker 1 (04:32):
Yeah, I don't know, you're selling me that part real good now.
It's fun.
Speaker 2 (04:37):
I'm telling you, it's
worth it.
Speaker 1 (04:45):
Hey, it's always fun
to get pooped on.
My kids used to think so when they were little.
Anyways, well, that's awesome, that's awesome.
Uh, the New York Snow White, that's great, and I'm looking forward to it.
You know, you work hard, so you deserve to.
Speaker 2 (04:52):
To have a vacation and have animals poop on you? That's great.
Well, thank you. I'll bring back pictures, not of the poop but of the animals.
Speaker 1 (04:59):
So, well, well, um, uh, what can I tell you?
You know, we're talking about how it's cold up there and not so cold here, but oh wait, Kevin just said something.
"No, it's not, trust me."
I guess Kevin has had some experience with that, with the kids.
He's having all that parenting experience.
This is great.
(05:19):
We're definitely bonding over that.
No, but Kevin was saying that he'll take the snow and ice over hurricanes, and, you know, I don't know about that, I guess it's about taste.
Oh, now, now let me get serious.
Look, we had a hurricane in Florida last week, Milton, as most people watching the news saw, and it was
(05:41):
serious for the west coast, for Tampa, a lot of flooding, you know.
So it was serious.
I won't deny that, and that's not something to make fun of, right.
So there's some risk to living here, but there's risk to living everywhere.
If you're in San Francisco, or anywhere in California, you might get earthquakes.
In some other parts, in the mountains, you might get wildfires.
Um, so there's always risk wherever you live.
But I guess you pick, you pick the risk that you're
(06:03):
willing to take on, right.
So I'm okay with the hurricanes.
We got lucky here in Orlando last week.
My opinion, again, is from being up in Seminole County, which is kind of north of Orlando.
We only got like some wind, some rain, but it wasn't as bad.
With Ian a couple of years ago, I felt it here to be worse, more wind and more rain.
(06:23):
That's just me here.
Speaker 2 (06:24):
Yeah.
Speaker 1 (06:25):
So, so, you know, I didn't even lose power.
I had the lights flicker across the night like four or five times, and thankfully nothing got fried with the flickering.
That's good, that's good, yeah.
But we didn't lose power, so I didn't get to use my generator.
You know, I get excited, you know.
But hey, look, Andrea. Andrea is in the chat.
Yeah, hello, she's awesome, good to see
(06:46):
you here.
Um, so yeah, so no, we're lucky, nothing happened to our house, and, uh, you know, recovery is moving along, and hopefully the Tampa Bay area, kind of from Cape Coral and Fort Myers up north, all that coast, hopefully they get the power back soon, and recovery efforts are successful and quick.
Speaker 2 (07:05):
So yeah, I have a good friend there that has the power back now in the Tampa area, so yeah, that's awesome.
Speaker 1 (07:14):
That's awesome.
They're doing good.
They're doing good work.
So, yeah, so a lot of things coming.
A lot of things happened last week, but now, uh, let's get to the meat and potatoes of the show.
So what's going on in the last couple of weeks that we need to let folks know about?
Speaker 2 (07:36):
We have a public service announcement about your extractions.
That's where, um, you can find, like, updates on things that are happening with, uh, the Cellebrite tools, but also with extraction issues or other types of digital forensics issues.
Um, and there's an announcement in there that, uh, recently discovered, there's an issue where the Android keystore is not
(08:00):
functioning on some devices.
Um, it may have an empty keystore file, and that file is called secrets.json, so it may cause either an empty file or the file not being present at all.
They're saying that it's due to external factors, and it's primarily affecting devices running Android 15 or ones with
(08:24):
a newer version of Google Play Services.
So keep an eye on your extractions.
If you're missing that keystore, you may be missing data like Sessions, or the Samsung Rubin data, if that needs the keystore to decrypt; you'll be missing that if your keystore is not extracting properly, or not even
(08:45):
extracting at all.
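For listeners who want to script that check, here is a minimal, hypothetical sketch of what the announcement describes: flagging an extraction whose keystore file (secrets.json, per the PSA) is missing or zero bytes. The directory-walk approach and the function itself are assumptions for illustration, not any vendor's implementation.

```python
# Hypothetical sketch: scan an extraction for the Android keystore file
# ("secrets.json", per the announcement) and flag it when it is missing
# or empty. Adjust the file name/paths to what your tool actually exports.
from pathlib import Path

def check_keystore(extraction_root: str, name: str = "secrets.json") -> str:
    """Return 'missing', 'empty', or 'present' for the keystore file."""
    matches = list(Path(extraction_root).rglob(name))
    if not matches:
        return "missing"          # file not extracted at all
    if all(p.stat().st_size == 0 for p in matches):
        return "empty"            # extracted, but zero bytes
    return "present"
```

Anything other than "present" would be your cue to note the device for re-extraction once a fix ships.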
Speaker 1 (08:47):
Well, and correct me if I'm wrong, but some chat applications use the keystore to, and I mean, encrypt and decrypt the messages, right?
Yes.
And you know, this is the thing, right?
We're used to running the tool, and if we don't see chats for a particular application, the assumption is there's no chats.
Well, that cannot be the assumption, right?
You need to at least give it a quick look.
(09:07):
And we discussed previously in other shows, and we need to do it again next year, a methodology and what you should do when you work on phones, right.
And one of the things that I suggest folks do is try to at least ascertain what has been parsed versus what hasn't been parsed, right.
Because the tool won't tell you: the tool tells you what it got; it's not going to tell you what
(09:28):
it didn't get.
Even though it's there, it's not going to tell you that.
So I'm going to make one up, not make one up. Let's say Signal.
Let's assume Signal uses the keystore, for example, right.
If the keystore is not there, the tool will show no Signal chats, no chats from that app, but it won't tell you
(09:48):
that they're there; it's just that it can't decrypt them because it doesn't have the key.
It's not going to tell you that.
And that's important because, let's say, this issue happens in your case, and there's some chatting applications that are encrypted; you don't have the keystore, you cannot get to them.
But let's say there's a patch later, or something, the tool gets updated, and they're able to get it later.
If you don't take note of that, you won't have the knowledge to go and maybe re-image that phone again, or pull the keystore out, when there's support for it, right.
So
(10:11):
you have to be really aware of not only what the tool is showing you, but also what the tool is not showing you, and this is one good example of why we need to do this.
Speaker 2 (10:21):
Right.
Cellebrite put out in their announcement, too, that they're working on a solution for this currently, so hopefully other vendors are as well, um, other vendors that extract data.
Um, and, uh, their suggestion is: if these applications are relevant to your case, right, so if you're looking for that Samsung Rubin data, or Sessions,
(10:41):
or whatever may not have decrypted, hold on to it.
When that fix comes out, you'll have to re-extract for that data.
Speaker 1 (10:49):
Yeah, exactly, and that's the whole point.
If you don't know that there's stuff that was left over, how will you know you need to re-extract it when the solution comes?
And again, that speaks to taking some time to understand what's happening with your tools, how your devices work, because most examiners, I'll be straight, they'll just go and parse it.
Here's what I got.
(11:09):
Were there any Signal messages?
Let me see. Do I see Signal here?
No, there's not.
No, there are; it's just that they weren't decrypted.
So you know, you've got to be really careful there.
Speaker 2 (11:18):
Yeah, definitely, definitely.
I would say, too: look, I mean, depending on what tool you're using, look in the log files.
Or, specifically for PA, because I use Cellebrite tools a lot, that trace window.
You can look in the trace window, and if it's not decrypting something because it can't find the keystore, or doesn't have the keystore, it'll say it right in that trace
(11:39):
window. Yeah, I don't remember exactly what it says, but it literally tells you that it can't decrypt this database because it doesn't have the keys.
Speaker 1 (11:47):
Whenever I use PA to do anything, the trace window is always open.
I cannot use it without that thing being open, period.
Yeah, I agree, that's just how it is.
And even in the tooling that I kind of lead, with the LEAPPs and stuff, that's why we put those errors up front as it's processing, because I believe we need those to make sure that we can follow
(12:08):
up then on things.
And, you know, quick related point, right: it's fine to have the trace window, and that will show you if there's a problem with things that the tool is supposed to get, right, but it's not going to show you a problem with something the tool doesn't know how to get, because it's invisible to the tool, right?
So that's why, and I mean, of course, do what
(12:28):
we do, use the trace window, for sure, but always take a few minutes; it's not long.
If it's an Android device and you have a full file system extraction, go to the data/data folder and browse quickly through all the directories there.
Those directory names are reverse-URL bundle ids, like, I'm going to make one up, com.signal, whatever, right, and you can usually tell
(12:50):
what it is.
So at least you can look and visually check, just in case, because if it's not parsed by the tool, it's not going to show up in the trace window.
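That manual data/data walk can be sketched in a few lines. This assumes the extraction is laid out on disk with a data/data directory; it just lists the reverse-URL package names so you can eyeball anything the tool didn't parse (com.signal below is a made-up bundle name, as in the episode).

```python
# Minimal sketch of the manual check described above: list package
# directories under data/data in a full file system extraction.
from pathlib import Path

def list_packages(extraction_root: str) -> list[str]:
    data_dir = Path(extraction_root) / "data" / "data"
    if not data_dir.is_dir():
        return []
    # Each directory name is a reverse-URL bundle id, e.g. com.signal
    return sorted(p.name for p in data_dir.iterdir() if p.is_dir())
```

Comparing this list against what the tool reports as parsed is the quick visual check being described.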
Also, look at the artifacts of installed apps that the tooling provides you. But still, I always like to look at it myself; I don't like trusting the tool to just show me what's
(13:11):
installed.
I'm going to look myself, right? Yes.
And what I like to do is I either run the applicationState.db parsers for that, and, long story short, what applicationState.db does is it correlates the apps that are installed and where they are in your device, all right.
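As a rough illustration of that correlation, here is a hedged sketch that reads bundle identifiers out of a copy of applicationState.db. The table name application_identifier_tab matches what has been published in research on this iOS database, but verify it against your own extraction before relying on it.

```python
# Hedged sketch: pull app bundle identifiers out of a copy of iOS's
# applicationState.db. Table/column names are from public research;
# confirm them against your own image before relying on the output.
import sqlite3

def list_app_identifiers(db_path: str) -> list[str]:
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT application_identifier FROM application_identifier_tab"
        ).fetchall()
    return sorted(r[0] for r in rows)
```

The full parsers go further and join these identifiers to container paths; this only shows the core lookup.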
And I also run, in iLEAPP, the tooling that we all put together
(13:33):
, one that looks for some metadata plist in each app directory.
Even if the app's deleted, that plist might be there, as long as garbage collection hasn't started, right.
And you get a list of all the apps that are installed, and possibly apps that were uninstalled fairly recently,
(13:54):
right.
And I always do that because, and again, it only takes a few minutes; sounds like a lot, it takes a few minutes, because I want to have that knowledge, to have good situational awareness of the device I'm working on and what other work might be pending in the future.
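A sketch of that metadata-plist walk, under the assumption (from public LEAPP-style research) that each iOS app data container holds a .com.apple.mobile_container_manager.metadata.plist whose MCMMetadataIdentifier names the owning app; a container whose plist survives after the app itself is gone hints at a recent uninstall.

```python
# Sketch of the metadata-plist walk described above. File and key names
# are from public research on iOS app containers; verify against your
# own extraction.
import plistlib
from pathlib import Path

METADATA_PLIST = ".com.apple.mobile_container_manager.metadata.plist"

def apps_from_containers(containers_root: str) -> list[str]:
    found = []
    for plist_path in Path(containers_root).glob(f"*/{METADATA_PLIST}"):
        with open(plist_path, "rb") as f:
            meta = plistlib.load(f)
        ident = meta.get("MCMMetadataIdentifier")
        if ident:
            found.append(ident)
    return sorted(found)
```

Diffing this list against the tool's installed-apps report is the few-minutes check being described.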
Does that?
Speaker 2 (14:08):
make sense, Heather?
Yeah, it definitely makes sense, and yeah, it's super quick to
run the LEAPPs to look for that, like seconds. Oh yeah.
Speaker 1 (14:17):
And you look at the
report and you quickly see the
app Boom, boom, boom, boom, boom, boom, boom.
And I have a couple of caseswhere I could show that an app
was installed.
The app was not there anymore,but the evidence that it was
installed was there in thatmetadata playlist and it was
really important for the caseand some tools.
I don't think they actuallyshow you that as a report.
So it's on us, it's on us tomake sure that happens.
(14:40):
Just real quick: Poppet, from Melbourne, all the way out in Australia.
I don't know what time it is, but I bet it's either really late or really early. Hi!
So thanks for hanging out with us at the opposite time that we have here.
And New Zealand, from Wellington.
Speaker 2 (14:56):
New Zealand! Nice.
Speaker 1 (14:59):
One of my favorite
places in the world is New
Zealand, and it was a struggleto get there.
It's not New Zealanders' fault,it's the fault of the United
States aviation system andCarl's Strike, but one of my
favorite places to be is NewZealand.
So good to have you here.
Speaker 2 (15:14):
Yeah, definitely.
I'm still jealous about your New Zealand trip, so we'll just
leave that at that.
Speaker 1 (15:21):
Oh, my goodness.
Well, maybe in the future they'll have another event and we'll, we'll, we'll all go. Yes, yes, I'm in. Next time I'm going to force my way in.
Speaker 2 (15:29):
So, um, all right.
So Telegram updated.
There's a policy for Telegram that's been updated.
Um, I think it was, uh, on Luca Katanishi's LinkedIn that I first saw this.
He shares some really good updates in the digital forensics community, so if you haven't connected with him on LinkedIn,
(15:49):
make sure you do.
But Telegram updated their policy to share IP addresses and phone numbers with authorities if there's sufficient evidence of involvement in criminal activities that violate the platform's terms of service.
So I think that's kind of the opposite of the way a few other
(16:09):
platforms might be headed, but they are now alerting authorities if these terms have been violated.
Speaker 1 (16:17):
As always, none of the things that we say here represent our employers.
We don't speak for them at all, and we are opining as just another community member, right?
And that being said, I think it's pretty obvious that when the CEO gets arrested in France, yeah, that might spur some changes. Again, we don't know for a fact that that spurred
(16:37):
it, but I think it might be reasonable to assume. Yeah, he's got a couple things, a couple things, that make him look good coming up in these months.
Speaker 2 (16:44):
I think yeah yeah,
your Honor.
Speaker 1 (16:47):
Look, we are totally
compliant with the police
officers.
Speaker 2 (16:50):
Now you know they do
point out in their updated
policy, though, that they arenot sharing user messages with
the authorities.
So, I mean, IP addresses and phone numbers are what you're getting.
Speaker 1 (17:02):
Yeah, I mean, up front, I'm ignorant of how Telegram works on the backend.
Obviously I haven't really done any research on it, but most chat providers are doing that now, right: they make sure the encryption is handled by the endpoints and not by them.
So then they can claim, hey, we don't know anything about it. But at some point one would assume that some sort of identifiers
(17:23):
must exist, and in this case it's pretty obvious: the IP address and phone numbers, which I believe are used to register.
Now, does that mean that criminals are not going to try to do something else? Get behind VPNs, use SIM cards, the burners? For sure, right.
But just even having that data, even if it's fake or whatever, might lead to further investigation.
So I mean, there's the balancing between privacy and the responsibilities we have, you know, as citizens and as law
(17:47):
enforcement, to protect, you know, law-abiding folks.
It's something that we'll keep, you know, kind of going back and forth on, but I think it's a good thing and it's a positive development.
Telegram is a really popular worldwide chatting application, among other things.
(18:07):
So, uh, this is a good, a good, uh, development.
Real quick: um, Jessica is heading over to Australia.
Jessica, hi, good friend, friend of the podcast. So can I go with you?
I bet she's staying up late so she can start getting into the time zone over there, right?
Yeah. And Bo Dissel, who was in my class, so thank you for being there.
Again,
(18:30):
I'm happy that it was of use, the class that I gave on mobile forensics. Not me, well, me, yes, but also two more instructors, right.
It was a great event.
So thank you for being there as well, and for paying attention to my blathering.
Speaker 2 (18:43):
Awesome.
Uh, there was also a recent, um, posting. I always see everything on LinkedIn. So, a posting by MSAB.
XRY is their tool, and they have, um, an early release for a tool called RAM Analyzer.
We've talked about RAM on a previous episode; I think it was one of our really early episodes, where XRY
(19:05):
actually has the capability of pulling RAM from certain devices, and they're planning on having the capability for additional devices in the future.
But the new tool was developed to help you analyze and make sense of RAM dumps from mobile devices. It is specific to XRY Pro users
(19:26):
, so you'll have to be a Pro user to do that. And I have to share the picture they used with their announcement, because I think it's funny.
So they put out their announcement, but the picture is for RAM Analyzer, with a picture of a ram with giant horns that just says "nice", with a mobile phone and a smiley face.
Speaker 1 (19:44):
But I don't know. Get it? Get it? RAM?
Speaker 2 (19:45):
Got it.
But if you haven't had a chance to check that out, definitely check it out, because it's really cool.
I've had a chance to check it out, and it's really cool.
Speaker 1 (20:00):
No, and I wish... I mean, I don't have this knowledge, and I don't think you acquire that knowledge overnight, but the knowledge I'm talking about is how to decode things in RAM.
Right, I'm not a RAM expert, by any stretch of the imagination.
The best I can do is pull RAM from a Windows computer and then, you know, chug it over to somebody else that knows what they're doing.
Can I run Volatility commands?
(20:20):
Sure, I can do that, right. Um, but I'm not really an in-depth memory guy, right.
But I think it would be really good... you see how developed analyzing memory in Windows is, right? Pretty advanced Volatility tooling, folks that specialize in that.
Um, hopefully something like that starts to develop for Android devices, because, by the way, this is an Android device
(20:42):
thing, right. You're not getting RAM from iOS; you're getting this RAM from certain Android devices, as support is provided.
And hopefully we get to that level where we have, like, a little utility for Android RAM, which I think they're kind of trying to build, right, and hopefully folks can go and say, hey, look, these are the structures and this is why they're important, and go from there.
(21:02):
So that's something I hope for in the future.
Speaker 2 (21:06):
Previously, when we did the podcast about RAM, there were questions, too, like: oh, but you're not going to get deleted data.
And actually, I found one artifact in the RAM that was deleted data; it still remained in the RAM.
It was a Samsung note, because I was doing it on my Samsung Galaxy, and it was a Samsung note that I had deleted, and I
(21:28):
actually found the content and timestamp right in the RAM. It still resided there.
Speaker 1 (21:32):
So there is potential, yeah.
That's amazing.
And if you have a super big case, and again, we're not haters of any tools, but we're also not shills, right, we just tell you what we see, right. And in this case, if you have an important case, you might need to get this tool and check out memory, because memory might have something relevant
(21:53):
that you might not find anywhere else.
Speaker 2 (21:56):
I have to share this comment, because I think I'm going to do it.
Jessica says if I get to the airport by tomorrow morning, she'll put me in her suitcase.
I'm not sure I'll fit in your suitcase, but maybe.
Speaker 1 (22:11):
I think you do. I think you do.
Now, you think so? Yeah.
Speaker 2 (22:14):
Maybe there's a
slight chance.
I'll have to kick all her stuff out of the suitcase, so I'm going to guess she needs clothes to go over to Australia.
Speaker 1 (22:22):
Look, just put them all on. You, you wear them, you wear them, and they go in. Get in.
Speaker 2 (22:30):
Um, on the RAM Analyzer, quick, before we move on to the next topic, though: there's a YouTube video that, um, gives a brief explanation of how it works, and, um, I know in the release notes for the XRY version that supports it there's, uh, more detail on how to actually process it and how to kind of parse it and view the data.
Speaker 1 (22:49):
So, uh, if you have the capability of looking at that, go check out the YouTube video and then check out the, um, release notes.
Yeah, and as always, we'll put this in the show notes; and for the folks that are listening, the show notes from the podcast, they'll be there and also in the blog for the show.
Speaker 2 (23:07):
Yes, yes. Okay, so this one, I love this topic.
Yeah, so everybody that's listening, right.
Speaker 1 (23:16):
You can... so you had a chance to go to the bathroom previously, but you lost it, right?
This is a topic that I think is going to be, you know, a topic that we're going to appreciate, you know.
Speaker 2 (23:29):
There's been a lot of chatter around AI.
We've talked about AI, I don't know how many times already, and I think we're probably going to continue.
Um, but there was an article that came out, and the title of the article is "Expert witness used Copilot to make up fake damages, irking judge".
Speaker 1 (23:47):
Oh, my goodness. And I've been on this AI binge, I say binge, for the last three or four days, asking the community through LinkedIn all these questions about AI and its use in forensics. A lot of folks commenting, and then this thing came out, right? So it was totally up our alley.
So what happened with that? Tell us the story about the expert witness using Copilot.
Speaker 2 (24:09):
Yeah, so it's actually a court that's fairly local to me.
The decision came out of Saratoga County Court, which is about a half an hour north of me; it's in between my house and my parents' house, and actually we work with that court all the time. But that's kind of beside the point.
They ruled that the use of AI as a tool to assist in preparing
(24:35):
an expert damages calculation should be the subject of a Frye hearing.
So if you don't know what a Frye hearing is: that is to determine admissibility prior to, like, an anticipated trial.
Um, this case, long story short, um, involves, like, a dispute over property, and, um, an expert was brought in to calculate, uh,
(24:57):
some numbers for damages related to this dispute, and the expert used Microsoft Copilot to calculate the damages and submitted the report directly from Copilot, and did no additional work.
Speaker 1 (25:13):
I mean, he couldn't have used ChatGPT? Come on, it's a joke, people.
It's a joke, it's a joke.
Speaker 2 (25:22):
Would it have been
better?
Speaker 1 (25:24):
I don't know.
Speaker 2 (25:27):
So, I guess, specifically in the presentation and review of evidence and documents, he used this. You cannot trust that to write your report for you without any additional verification.
The court, I'm going to put up a picture of something the court said here.
(25:47):
Well, let's see, and I'll read it to everybody. But, um, so hold on, the court...
"So perhaps the son's legal team wasn't aware of how big a role Copilot played." Um, Schopf, who is the judge, noted that Ranson, who is the expert, couldn't recall what prompts he used to arrive at his
(26:09):
damages estimate.
The expert witness also couldn't recall any sources for the information he took from the chatbot, and admitted that he lacked a basic understanding of how Copilot works or how it arrives at a given output.
Speaker 1 (26:26):
Wow. Yeah, no, and it gets better, right?
Speaker 2 (26:31):
Yeah, so I mean, this
is-.
Speaker 1 (26:33):
But wait, there's
more.
Speaker 2 (26:35):
This is awful in itself, but apparently, according to one of the articles we read on this, the court entered some prompts into Microsoft Copilot, to kind of, like, do some testing of their own.
One of the prompts they entered is: can you calculate the value of two hundred and fifty thousand dollars invested in the Vanguard Balanced Index Fund from certain dates?
(26:58):
It returns a value, right.
So then they asked the question again, just in a different way, the same query but phrased differently, and they used a different computer. I'm not sure if that would have mattered or not, but it returned a completely different number for basically the same question.
I mean, these two numbers were shown to be close to what the
(27:21):
expert had in his report, but they weren't the same.
Speaker 1 (27:24):
Like, they're different. Define "close", right?
Speaker 2 (27:27):
Yeah.
Speaker 1 (27:27):
No, and for people to understand: when it says "the court did this", it's the judge, right?
The judge, I guess, whipped out a laptop or a computer and went, you know, "I'm going to do this myself", and just started, you know, clicking at the thing.
Are you kidding me?
Like, if I'm this guy, I'm dead, I'm already dead.
They can pronounce me dead on the spot.
(27:49):
They don't even need to call the ambulance; just call the funerary carriage and take me to, you know, where the dead people go. I don't even know the name for it in English. But are you kidding me?
Speaker 2 (28:00):
Oh my god, that's crazy.
The fact that there's any variation in the number it'll calculate, with the same type of question, or the same data posed in a maybe slightly different question... I mean, stop using this for court, people, if you are.
Speaker 1 (28:15):
Well, I mean, I want to say so much, so many things at the same time.
Can I, Heather? Can I say a few things? Go, go. Okay.
So there's a thing, right? We're at the point... there's a thing you can say.
And actually, Brett is on the chat, and Brett is another friend of the show, and not everybody's a friend of the show; these are really special people, right? And I'm going to read his comment.
(28:35):
Right: "Some use AI to streamline their investigations, create creative insights, and then validate everything AI suggested. Others use AI because they're lazy." Right? Yeah.
Now, that being said, let me make some comments on that.
I'm going to leave this comment up there, because this is something Brett and me, and some other people, were talking about on LinkedIn.
Right, I'm thinking about a couple of things.
Right: yes, you can use AI to streamline investigations, but
(29:00):
at some point, are you? Because if you cannot trust a single thing the AI says, because there's randomness built into the AI... that's what the AI is; what makes it generative is a bit of randomness. And in our scientific neck of the woods, we don't want randomness.
We want repeatability, right? We want things to be repeatable.
(29:20):
We want to know inputs and expect outputs. And like the judge did: the judge asked the question, and ChatGPT, well, Copilot, or whatever the AI was, gave different answers, right?
That's tough, because at some point, if this is a small data set, you can say, okay, I can check a small data set. But what if the data set is large enough that the generative AI gives you all these sorts of outputs?
(29:41):
Right? Is it really worthwhile for me to go through the AI if I have to verify every single thing? Because there's no validation of the tool, right? The process itself, it's a black box that's unknowable to me.
And so then what, right?
I cannot guarantee that I can verify the inputs against the outputs, especially when I don't know what the inputs might be,
(30:06):
right? Because I'm doing it through the lens of the AI, right.
And I'm still on the fence. Oh, Jessica's making a great comment; we're going into that, Jessica. I'm still on this.
And look, I don't have a solid opinion on AI one way or another
(30:26):
as a concept, because I see a lot of utility.
I'm actually making a presentation for ECPI College next month about AI and some things to consider, both pros and cons, for digital forensics.
I'm working on that presentation right now, and I'm still working through those things in my mind.
Because do I really want to go with AI to the court? Because let's think about a couple of things, right: when you go with your process to court, right, before you get there,
(30:48):
Heather, is that something... do I hide the ball and keep it to myself and just spring it on people in court? How does that work, Heather?
Speaker 2 (30:53):
What do you have to
do first.
Yeah, no, you can't just springit on people.
Speaker 1 (30:57):
No, I mean you have to go through a process called discovery, yeah, and you need to provide it to the other side.
And I say other side, it couldbe the defense, it could be
prosecution, it could be twoparties in a civil case.
So the question is: well, I tell them I used AI, and then the other side is going to ask: well, what prompts did you use?
(31:17):
What were the responses?
What responses did you considerrelevant and which ones you
didn't, and why?
So you validated a few of those. What tells me, what guarantees me, that your validation of the use of this tool was properly done, considering that the tool gives you wrong things? Let's go even further, right. With a big enough data set, the tool will make some interpretations, because of the whole talking to it, right,
(31:40):
the tool has to interpret dumbly.
And I say dumbly because I recently saw some research showing that LLMs, generative AI, don't think like people think. They're really dumb at math and other things, because it's not really thinking as we think. They seem like they're thinking, but they're not, right. So the system will try to understand, in its own way, what
(32:01):
you're trying to say, and will respond to that.
And not everything is black and white; some things are based on interpretation.
And then you say well, I'llcheck it.
Well, how much are you going tocheck?
And I'll even say more Rightnow, people don't want to look
for unparsed apps on phones.
They don't want to do it.
They say the tool does it.
And if the tool didn't do it,then it's not important.
Do you think they're not goingto do the same thing with AI and
(32:24):
just calling the examiner lazy?
And again, this is not a dig onBrett, this is just a different
perspective that we're sharing,right?
Because what Brett is saying, I agree with him 100%, 100%. He's correct.
I'm just providing anotherperspective on top of that,
right?
If I assume that they're lazy, which they are, by the way, that does not solve the problem, because the tool will
(32:44):
provide them that, and they will do it. They will just copy-paste it out, because the tool gave it to them.
And tool vendors are really focused on tool capabilities; they're never focused on tool limitations, or they just do enough to say that they do, right?
Am I lying here, heather?
Am I out?
Speaker 2 (33:00):
of base.
No, you're not lying.
You're not lying at all.
Just to go back to your discovery comment too, though: if you're not documenting the prompts that you're putting in, keeping notes on that, and submitting it for discovery, that's going to be a violation of discovery.
I can see that being thrownright out completely.
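One way to keep that discovery obligation manageable is a simple append-only log of every prompt and response, including the ones you discarded and why. This is a hypothetical sketch of what such a record could look like; the field names are our own, not any mandated format.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_interaction(logfile: str, prompt: str, response: str,
                       used_in_report: bool, reason: str) -> dict:
    """Append one AI interaction to a JSON-lines audit log suitable for discovery."""
    entry = {
        "utc": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        # Hash lets the other side confirm the logged response was not altered later.
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
        "used_in_report": used_in_report,   # did this answer inform a finding?
        "reason": reason,                   # why it was kept or discarded
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

The append-only, one-record-per-line shape makes it easy to hand over the whole interaction history, not just the answers that supported your conclusion.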
Speaker 1 (33:20):
All of the work that
you've worked on, absolutely.
And look, do we use black boxesalready?
For sure, there's a lot of proprietary data on how the tools work, right. But with the tools, there's some sort of validation of that process from the vendor side, and then from yourself as an examiner. At least you can say: look, when I put bananas and strawberries and peaches into this thing, I expect a fruit salad on the other side.
(33:42):
You know, and I've seen that repeated enough times, that I
know it's going to happen.
Therefore, when I put otherfruits that are unknown to me,
if I see a fruit salad, it'sbecause the inputs are fruits,
right, can I do that with AI?
Is there a process to do that?
And maybe there is.
I'm ignorant, right, but I don't think we can make a one-to-one comparison to our verification and validation
(34:03):
processes for current tools, and then export that wholesale, and say it applies to AI.
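That fruit-salad validation, known inputs checked against known outputs before the tool ever touches unknown evidence, can be sketched like this. The parser and dataset here are made up for illustration; they stand in for any deterministic tool.

```python
# Minimal known-answer validation: feed the tool inputs whose correct outputs
# are known in advance, and only trust it on unknown data once it passes.

def tool_parse_chats(raw: str) -> list[str]:
    """Hypothetical stand-in for a vendor tool that extracts chat messages."""
    return [line.split("|", 1)[1] for line in raw.splitlines() if line.startswith("MSG|")]

# Known dataset: we created it ourselves, so we know exactly what the output must be.
known_input = "MSG|hello\nJUNK|xxxx\nMSG|see you at 5"
expected = ["hello", "see you at 5"]

def validate(tool, cases) -> bool:
    """Return True only if the tool reproduces every known answer exactly."""
    return all(tool(raw) == want for raw, want in cases)

assert validate(tool_parse_chats, [(known_input, expected)])
# Only after this passes do we run the tool on evidence with unknown contents.
```

This whole scheme rests on the tool being deterministic: the same known input must produce the same output every time, which is exactly the property a sampled generative system does not give you.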
And I'm having some conversations with people way smarter than me, for example, like Jessica Hyde.
She has this deep knowledge onhow some of this AI stuff works
and I've been so happy that shehas some time to speak to me on
it.
I don't understand it yet, but the fact is that there are other ways
(34:27):
of validating and verificationprocesses that need to be
applied to these systems, and Idon't think we are really doing
that as a field, and that's whywe're having all these troubles
and problems when we try tobring it into court.
Speaker 2 (34:53):
And all this nonsense
happens right.
Speaker 1 (34:54):
Let's, let's, let's.
I mean, before I get on my high horse: do you have something else on that?
Speaker 2 (34:56):
That particular court continued to ask questions of Copilot.
One of the questions was areyou accurate?
Copilot generated the followinganswer I aim to be accurate
within the data I've beentrained on and the information I
can find for you.
That said, my accuracy is onlyas good as my sources, so for
critical matters, it's alwayswise to verify.
The court followed up with asimilar question.
(35:19):
Instead of are you accurate?
It asked are you reliable.
It got a completely differentresponse.
Co-pilot responded with you bet.
When it comes to providinginformation and engaging in
conversation, I do my best to beas reliable as possible.
However, I'm also programmed toadvise checking with experts
(35:39):
for critical issues Always goodto have a second opinion.
And then there was one more that I highlighted, an additional follow-up question that asks: are your calculations reliable enough for use in court?
When asked, co-pilot respondedwith when it comes to legal
matters, any calculations ordata need to meet strict
standards.
Speaker 1 (36:09):
I can provide
accurate info, but it should
always be verified by expertsand accompanied by professional
evaluations before being used incourt.
Yeah, the AI is more aware ofthe limitations.
Yeah, it's more aware than someof the experts that use it.
That's insane, definitely.
Speaker 2 (36:19):
But I just... I find the difference in answers between "are you accurate" and "are you reliable"... I mean, I feel like it should return the same answer on that, but it was similar, I guess.
Speaker 1 (36:32):
I mean, it's not the best analogy, but what an LLM does is: imagine a train, right, and there are no train tracks. Right, the train is going at full speed, and the train is laying a track in front of itself as it's going really fast. That's what a lot of LLMs do, right.
It kind of builds the traintrack as the train is moving
right, or it's building theplane as the plane is flying and
(36:53):
based on a background ofinformation of what flying
should be and how should you flyor how the train works.
Right.
Which leads to the point that Jessica made in the chat a second ago. I'm sorry, I put up the wrong comment, hold on. She says: validating the results doesn't remove the bias,
(37:14):
right, because you only show affirmative responses. And I want to unpack that real quick, from what I understand of what she said, right.
First of all, how are thesesystems trained?
Where did the data come from?
Right?
Does the data?
Okay, wait, stop, stop thepresses.
Can you put your mug up to thescreen again so we can bask in
the glory of your cup of yourmug?
Speaker 2 (37:35):
I just wanted to
drink.
Speaker 1 (37:37):
We need to show your
mug.
Your mug says protect, attack,take naps.
It has baby Yoda Grogu in herbig cup.
Speaker 2 (37:46):
It is a big cup.
Speaker 1 (37:47):
Yeah, we need to
mention that it's bigger than
your face.
It's great, all right, thankyou.
Speaker 2 (37:50):
Thank you for
pointing that out.
Back to my rant.
Speaker 1 (37:53):
Okay, so, yeah, so
how are these systems trained?
Based on what data?
Right, If you give a data inregards to a particular segment
of the population, that meansthat other segments of the
population are not representingthe data set and you're going to
get really problematic resultsfrom it.
Right, and you know, especiallyif you're only focusing on the
(38:16):
answers that you care about,what about the ones that you
don't right?
Well, this one works for me.
Well, that bias that's built into the data set it was tested with, you don't even see it, right. And that's another big
issue that we need to talk about.
Right, and that's not eventalking about saying, okay, the
(38:37):
LLM was trained with data fromthe internet, right, but some of
that data they took without the owners' permission.
Speaker 2 (38:42):
Oh yes.
Speaker 1 (38:43):
Okay, so imagine this
.
This is my analogy right.
Imagine I go to court and I useall these expensive tools but I
did not pay for them.
I pirated them.
Or I pirated my Office or my Windows copy that I did my examination on, right, and the defense gets a hold of that. What do you think is going to happen? You know what?
(39:04):
I might get prosecuted for it,right?
Like we expect to do legal work in a legal way, right?
So what happens when some of these LLMs are fed in a way that's not transparent, in a way that's obscure, or not even respecting the rights of the creators of the original content, right?
How's that bad apple, thatpoison apple, filtered down to
(39:28):
my case, if my case is built on an LLM that uses that data? And I'm saying that because I don't know; I don't have the answer. I need some legal minds to start thinking about those things, right. Because we're tasking the AI based on knowledge that it shouldn't have.
Even more so, let's discuss knowledge that we shouldn't have. Let's say I ask the AI to look for evidence of a particular crime,
(39:51):
and the AI says, yeah, there's evidence of this crime and also, adjunct to it, evidence of other crimes. Did I have search warrant authorization to look for those other crimes? And you know, the AI did it. It wasn't me, it was the AI. I didn't tell it to do that, right? Guaranteed. So yeah, so
(40:14):
then the question is: how do we put some safeguards, automated, in the code for these systems? How do we validate in a way that's actually representative of how AI works, where we're not trying to copy a system that works for our forensic tools, which are pretty static? They take data, they just go through the data structures and
(40:36):
parse them out, and then you make the interpretation. Now you put in this filter, with the AI kind of giving you a few interpretations before you make the final decision.
How do we validate and verifythat data in a way that's
scientific and presentable atcourt, right?
You know, we always laugh about it: oh, look at the expert, he put it in without checking, ha ha ha.
Let me tell you, I believe Imight be wrong.
(40:58):
I hope I'm wrong, but I believe there are a lot of issues where, even if you make your best attempt at using these tools, you might get burned, because these tools have not gone, I believe, through enough of this process of testing, in the sense of being used in the legal process, being used for casework, both civil and criminal. I don't think these tools have been applied enough in these fields of
(41:21):
knowledge for us to kind of saywe know what the outcome of
this situation is going to be.
I don't think we're there yet.
What do you think?
Speaker 2 (41:28):
They haven't been
around long enough to work out
all the bugs, definitely.
Yeah, we haven't evendiscovered half of the bugs that
are going to be discovered inthe future.
Speaker 1 (41:39):
So, yeah, yeah, let
me put this comment from Brett
right, so AI should be putthrough the scientific method.
It's like asking a question ofsomeone you don't know.
You have to verify the answersindependent of AI, and that's
correct right Now.
My expansion of that point iswell, the thing is that the
scientific method right, andit's about asking questions and
expecting some answers and thenextrapolating into the past and
(42:02):
into the future because of therepeatability of the thing.
Right, the scientific methodonly works because we know that
things are stable.
Right, gravity is not going toall of a sudden not work, right?
We know?
Yeah, you know what I mean.
We know that electrons behavein a certain way and the protons
and neutrons behave in acertain way.
Therefore, I can make somepredictions in how things came
(42:24):
to be.
From the nucleus of stars, youknow, fusioning hydrogen.
Does that make sense?
But when you have a system that, at its heart, has randomness, and randomness in a way that our methods cannot quantify, right? Do we know, with certainty, what the error rate of an LLM is?
(42:45):
I don't have the answer.
Maybe there is.
Again, I'm ignorant.
So take this with a big grainof salt, not a rock of salt from
me, right?
This is me talking with a lotof ignorance.
I don't know what the errorrates are.
How do you calculate them?
What's the confidence boundswith LLM outputs in regards to
what they're saying?
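For a sense of what an error rate with confidence bounds could look like in practice: if you spot-check a random sample of outputs and count the wrong ones, a standard Wilson score interval gives a defensible range for the true error rate. The counts below are hypothetical; this is general statistics, not a procedure any forensic standard currently prescribes for LLMs.

```python
import math

def wilson_interval(errors: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for an observed error rate (errors out of n trials)."""
    p = errors / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Hypothetical validation run: 7 wrong answers found in 200 spot-checked outputs.
low, high = wilson_interval(7, 200)
print(f"observed error rate 3.5%, 95% CI roughly {low:.1%} to {high:.1%}")
```

Note what this does and does not buy you: it bounds how often the system was wrong on the sample you checked; it says nothing about whether the next answer on your case data falls in the error set.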
Speaker 2 (43:02):
Do you think maybe we
should just ask Copilot or
ChatGPT what their error ratesare?
Speaker 1 (43:06):
I mean, maybe We'll
see what they say.
Well, actually, yeah, how dothey?
Speaker 2 (43:11):
do it and send in
your different answers that you
get.
Speaker 1 (43:15):
But that's the point.
Most likely we'll get differentanswers, right?
Yeah, and that's why I'm saying: the scientific method, it's true, we need to apply it. So again, Brett is correct. But the point I'm expanding on is the scientific method. It can't be, in my opinion at this point (and it's a tentative opinion, okay, people), it cannot be: I'm going to do it the same way I do my other tools, where I
(43:36):
look at the inputs and the outputs, and if they work, that's great, and if they don't, I'm just going to chuck it.
Because what about discoveryissues, right?
What about impeaching the process, when your bias is being reaffirmed by selected things that work, and you're totally dismissing the ones that don't?
Doesn't that tell you somethingabout the process?
Of course it does, especiallyif you're on the other side,
(43:56):
right?
So that's why I say the scientific method should be applied. The question is: how does the scientific method look, applied in these circumstances, to a system that has randomness built into it, where repeatability is not guaranteed?
Hallucinations are a big thingwith this type of system, where
they just make things up andeven if you say, well, my LLM
(44:18):
actually references the sourceto make sure it's correct, what
tells you that the reference iscorrect or that the
interpretation of that sourcedata is correct.
And then the last part.
Well, you verify it, sure, butwhen you have 100,000 of them,
are you going to verify them oneby one?
Am I gaining anything now by using AI? For certain processes, we might not be gaining anything, or any speed.
(44:38):
We have to verify because wecannot really validate the
process.
So we have to verify everythingright and we have to be careful
with those terms verificationand validation right.
Is there a gain there?
Some areas, I believe, do get alot of value from AI.
When you want to express anidea and or create.
(44:58):
What is it that you createdrecently, heather, with ChatGPT?
What was it?
Speaker 2 (45:03):
Oh, I actually I'll
take things and summarize them.
I'll put something I want to summarize into, like, bullet points. I just did it for a PowerPoint presentation for work, and I'm like, I
know what I want to say and,honestly, sometimes it just
makes what I've already createdsound a little better.
So I'll use it to make thingssound better, yeah.
Speaker 1 (45:20):
I mean and I see
value in that.
Well, no, I mean, there's value in that, of course. Don't come at me for it, but you're going to sound like a robot, right. But there's some value in that.
But for certain processes, howwill the scientific method be
shown to work in these systems?
And I think everybody thinks,oh, check the output and that's
it.
I think the courts are notgoing to just be happy with oh,
you checked it and that's fine.
(45:41):
Yeah, okay, how do you check it?
Where's the negative results?
Like the judge said, explain tome how this works.
Well, you know I'm going togive you some weak explanation
If I'm on the other side,whatever that side is, I would
like to really know more.
And, Heather, we're not talking about specific cases, but we've seen cases where the expert on one side just has cursory
(46:04):
knowledge of the field (oh, definitely), and the other side goes at it, and they go: oh yeah, well, define this, define that, how does this work?
and they don't have answers.
That's horrible.
It looks horrible when youdon't have those answers.
Um, so yeah, I mean, there's a lot of good, there's a lot of bad, right. And look, you know, Jessica has some great
(46:28):
comments.
Ai tries to be a people pleaser, right yeah.
And and doesn't providecitations.
But even if they do, I'm goingto question them.
And again, do I get a timesavings on that right?
That's still an open question.
Speaker 2 (46:45):
And about... go ahead, go ahead. About the people pleaser comment, too: I have asked ChatGPT things in the past where I know the answer's wrong. I can tell by reading it, and I'll write, no, that's not right, and I'll, like, reformat my question. But when you write to it, no, that's not right, this is what
(47:06):
I'm asking you, it actually apologizes to you for getting the answer wrong. So it's definitely a people pleaser.
Speaker 1 (47:10):
Um, just a little tidbit, yeah. No, and again, that's also a bias that's within the system, right. We've done other shows that touched a tiny bit on bias, and I think we should do one in the near future again; how does that sound for people?
But, um, Brett makes a good point. Right, AI output will never be repeatable. Of course not. You can give it the same inputs or prompts, and it will tell you
(47:32):
different things, or related things, but never the same, right; or it's really rare. Now, he explains that, yeah, the answer can be put through a scientific method. Right, you can say: okay, this answer from the AI, is it true or not? Yes, you can do that, right. But then again, that's the whole point, right: at what point am I having any gains here? What am I gaining, right, when I have to... like, I don't want to...
(47:53):
For example, let's compare it with our tools. Right now, you validate that your tool parses SMS and MMS chat messages. Right, you validated that, and you verify the outputs with known data. Now you put in unknown data, and it gives you the chats.
What do I do? Here are the chats; here are the really important ones, maybe the two or three. I will go into the database myself, put an eyeball on it, and
(48:15):
go from there.
Why?
Because the tool was validated, and I verified with known data. Right.
Imagine if I had to verify everysingle line of that chat.
Because I cannot trust the toolto have pulled the things
properly, because I cannotverify it from the get-go.
I cannot verify that process.
The process is so obscure andrandom that I cannot have a
(48:36):
scientific verification.
Is it feasible?
I believe it's not right, andthat's the thing as we get more
into these use cases for thistype of tooling in digital
forensics.
Again, we're talking aboutdigital forensics specific
issues here.
I don't know how it looks inother fields of knowledge, but
do I want to verify?
My opinion is that most people are not going to.
(48:58):
They're going to think it's like using other tools, where they parse the chats, and here they are. Then what? I have to go through each and every one of them. And don't tell me that you trust it, because you can't, and you're not going to go through the 500 chats that the generative AI supposedly found, because you have no proof from beforehand
(49:18):
that it's consistent. Hopefully that makes sense to you.
Speaker 2 (49:21):
Am I talking out of hand?
What do you think?
It definitely makes sense.
I agree with all the points you're making. And actually, Malik just put up a chat: the biggest question is the algorithms. Are the algorithms used by the tool compliant with recognized forensic standards and practices?
And that kind of goes alongwith everything you were just
saying.
I agree.
Speaker 1 (49:41):
I mean, do they exist
?
Do they exist how?
Speaker 2 (49:44):
do we even measure
that?
I don't know.
Speaker 1 (49:46):
What are the
standards and best practices for
AI use in digital forensics?
I don't know them.
Maybe they exist.
Again, I think I can speak for you on this: we can agree that we just don't have enough knowledge to state that they don't exist, right? But I haven't seen them. And if they exist, they need to be popularized quickly, and if they haven't been made, they need to be made.
(50:07):
Like, for example, Jessica Hyde talks about F1 scores. That's something that I told her she needs to teach me, because those are ways of kind of measuring that validation of tooling. In regards, again, I don't have the knowledge, but in regards to probabilistic analysis, and how you can have a confidence interval, which is good for dealing with data that's probabilistic in nature.
(50:27):
I am ignorant on that.
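For readers curious what an F1 score measures in this setting: it combines precision (how much of what the tool reported is real) and recall (how much of what is really there the tool found). A minimal sketch, with made-up artifact sets standing in for planted ground-truth data on a test device:

```python
# F1 in a tool-validation setting: compare what a parser recovered against
# ground truth you planted yourself. All names and numbers here are hypothetical.

def precision_recall_f1(found: set, truth: set) -> tuple[float, float, float]:
    tp = len(found & truth)                                  # correctly recovered artifacts
    precision = tp / len(found) if found else 0.0            # fraction of reports that are real
    recall = tp / len(truth) if truth else 0.0               # fraction of real artifacts recovered
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

truth = {"msg1", "msg2", "msg3", "msg4"}   # artifacts we know are on the test device
found = {"msg1", "msg2", "msg5"}           # artifacts the tool reported (msg5 is a false hit)

p, r, f1 = precision_recall_f1(found, truth)
print(p, r, f1)  # precision 2/3, recall 2/4, F1 4/7
```

The appeal of a score like this is that it penalizes both failure modes at once: a tool that reports everything (perfect recall, poor precision) and a tool that reports almost nothing (perfect precision, poor recall) both score low.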
I'm really looking forward tohaving a conversation with
Jessica and we actually, youknow she's so busy and she's
been so kind and kind of puttingme in her calendar in the
future, so I'm looking forwardto that talk.
Speaker 2 (50:38):
Conference me in on that?
Speaker 1 (50:40):
Well, I don't see why not.
We'll ask Jessica in a second.
But the point I'm saying withthat is, yeah, there has to be
something.
Again, I don't believe we cantake our regular way of
validating and verifying thingsin DF and just move it over to
AI.
I don't think we're good, because we might stub our toe without expecting it;
(51:01):
we might be caught by surprise, you know. And look... oh look.
I'm sorry, but one more thing, then I'll let you talk.
Speaker 2 (51:05):
I was just going to
put it up.
That's what I was going to do,thank you.
Speaker 1 (51:07):
Well then, read it.
Please read it, because Brettreally, really kind of, he
really summarized my point there.
Can you read that from Brett?
We are taking a technology, aithat is not developed for DFIR
and forcing it in DFIR.
Boom, and you know what?
Speaker 2 (51:26):
All the things I've been saying for the last 10 minutes, you just did in one sentence.
Speaker 1 (51:32):
That's exactly what
it is.
That's exactly what it isDefinitely.
And did we put up the comment with the order of the judge in the case we're discussing? What was his order? Do
Speaker 2 (51:40):
we have it.
You know, I don't have it, but I do know what it was.
Speaker 1 (51:44):
I think I have it.
Let me see if I can.
If I can, I can show it.
Speaker 2 (51:50):
I have the court case up on the screen. Somebody was asking in the chat about putting that court case up.
It's up on the screen.
It'll also be in the show notes, yeah.
Speaker 1 (51:56):
So actually, you know what? I think I might be able to. I'm going to share my screen, because I want to read it. I want people to see what his order was in regards to his conclusions on the use of this thing. So let's see. Here we go, so, share.
(52:32):
So the judge said can you seethat?
Yeah, all right.
The judge says, admitting that the court has no objective understanding as to how Copilot works,
Schaaf suggested that the legalsystem could be disrupted if
experts start overly relying onchatbots en masse.
But what is the marketplace forthese tools doing?
(52:52):
What are they doing?
Speaker 2 (52:53):
Yeah, adding it.
Speaker 1 (52:55):
They're pushing it, like really, really a lot.
Speaker 2 (52:59):
They are pushing it
hard.
Yes, you're going to be usingthis.
It's going to make your life somuch easier.
Everything's going to get donefaster until you get to court
and your whole case is thrownout because you relied on it too
much.
Speaker 1 (53:11):
So I think, and I guess this is my kind of finishing point on this topic, for my part: you have to be an expert. This guy is an expert, and he did not use his expertise in a way that was satisfactory to the court.
And that's not me talking.
That's what the judge said,okay, so don't get me wrong.
(53:33):
I'm not talking for anybodyhere.
That's what the judge said.
Right now, I look at myself.
I need to know how the devicesthat I analyze work, how the
data structures are done, howthey're parsed, and then maybe
I can look at ChatGPT, AI, or LLMs as something that might
(53:55):
help me in some way.
I don't think we're reallythere yet to be having this
widespread use.
I don't think so.
And again, for the legal issuesin regards to discovery, in
regards to validation, inregards to constantly having to
verify every single thing thatthe LLM comes up with, and for
now I'm staying away from it.
(54:15):
I'm not saying that you shouldor should not.
That's for everybody to decideon their own.
What I will say is that we need to actually look into these issues, have continuous conversations, be in touch with organizations like SWGDE, that's S-W-G-D-E, that are building frameworks to look at these things. Follow folks like Jessica Hyde and Brett Shavers, who are really
(54:35):
smart on these things.
Don't jump like.
Don't jump into the water withyour eyes closed, right, it
might be frozen, all right, Idon't know.
What do you think?
Speaker 2 (54:48):
No, definitely.
I 100% agree with all of that.
It's too soon to be overlyreliant and I look forward to
future research on maybeanswering some of the questions
that you just asked, because Iagree with you when you say I
just don't know, and I feel likeI'm the same way.
I just don't know everythingthere is to know about it and it
(55:08):
really makes me, I guess,nervous to trust it using it in
casework at all.
Speaker 1 (55:14):
So yeah, and I don't think I have the time to validate every single thing.
I might as well just do itmyself.
Speaker 2 (55:20):
Right, exactly, I'll
just do it myself.
Speaker 1 (55:23):
But you know again,
if you find value in it, I'm not saying you shouldn't use it.
Just make sure you're aware ofthe limitations and that you're
mitigating those limitations andyou're complying with your
discovery issues.
And let your prosecutors, or the lawyers you're working for if you're in the civil sector, know what you're doing,
because the time savings youmight get from that use might
(55:46):
cost you your reputation as anexpert.
And in this field yourreputation is everything.
There's nothing else. Can you be trusted? Like Brett said somewhere else in the chat, the AI doesn't swear to the contents of the report. The AI doesn't testify. You are the one swearing to it; you're the one testifying.
And if you stub your toe withAI, your reputation is the one
(56:09):
that's going to be dead.
And if your reputation is deadin this field, guess what?
You're out of the field.
Speaker 2 (56:14):
You may not be working cases at all anymore, at all. Period. Yeah, definitely. So, wow, craziness, all right. So yeah, we'll have more talks on AI in the future.
Speaker 1 (56:24):
I am 100% sure of it. Oh, I enjoyed this, this little segment.
Speaker 2 (56:29):
It was, like... it was great. Well, now I'm going to shift gears, because I want to show you guys a new tool that was released this week.
It's called iCatch and it'screated by Aaron Wilmarth.
He had a need to create a toolthat works well with the iOS
(56:52):
Cache SQLite database.
So the Cache SQLite database islike the main storage for the
Apple locations on an iOS device.
He didn't like the way any ofthe tools were displaying it.
He didn't like how it was, Iguess, being displayed in Google
(57:15):
Earth from the exports fromtools.
So he started to do research oncreating KMLs, pieced together
some scripts that he had usedand the new stuff that he was
learning on creating KMLs, andhe made a tool and he released
it to the public, I think just acouple of days ago.
In my office we have beenlooking for something like this,
so I was super excited to tryit out.
But what iCatch does... it stands for iOS Cache Analysis
(57:40):
for Tracking Coordinates History.
It's a utility that processesthe iOS Cache SQLite database
and creates a timelined KML mapfor use in Google Earth.
So I'm actually going to showthis.
Speaker 1 (57:55):
And just a quick side
note here If you're not
familiar with the Cache SQLite,that's one of the premier
databases in iOS devices inregard to geolocations.
It's pretty, really accurate.
It has a whole bunch of gooddata there and if you're working
on a phone, an iOS device andyou haven't looked at that
database, you're missing out.
You have to process those.
Speaker 2 (58:16):
Yes, so I have up on
the screen the um, the GUI, the
interface.
Um, you just have to run one line of script to install the requirements to run this, but there's an executable already compiled for you. So that's what I'm showing on the screen. You'll put in your case details.
So I just kind of pre-filledthis out.
I have a New York Police.
(58:38):
I did Examiner Heather, thecase number 12345, device info I
put iPhone 7.
Then what the tool is lookingfor is it wants the database
path.
So you'll be exporting your Cache.sqlite database out of your extracted data and pointing this tool directly at the Cache.sqlite, and then you just choose an output location for the
(59:02):
different file formats that itcreates, which it will generate
a log file based off of the toolprocessing.
It generates a CSV file withthe information contained from
the cache SQLite and the KMZfile for ingestion into Google
Earth.
On the interface you can choosewhich icon color you want.
(59:22):
There's red, green, blue,yellow and purple currently.
I'm just going to leave it redas the default.
And then there's a date timefilter.
With the Cache SQLite, ifyou're not familiar with it, it
stores thousands, tens ofthousands of data points and
they're very rapid fire.
I'm going to limit this just toone hour, and it still is a ton
(59:46):
of data points.
If you try and point this atthe entire cache SQLite and
create your KMZ, you're going tocrash Google Earth, so just.
Speaker 1 (59:53):
FYI yeah, unless you
have some industry, like you
know, strong mapping applicationlike that.
But yeah, it's not going towork.
You have to limit thosetimestamps for sure.
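For a sense of what a tool like this has to do under the hood, here is a rough, hypothetical sketch: query a location table for a time window, convert Core Data timestamps (seconds since 2001-01-01 00:00:00 UTC) to UTC, and emit KML placemarks. The table and column names (ZRTCLLOCATIONMO, ZTIMESTAMP, ZLATITUDE, ZLONGITUDE, ZHORIZONTALACCURACY) are assumptions modeled on Cache.sqlite; this is not iCatch's actual code, so verify the schema against your own extraction.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Core Data timestamps count seconds from 2001-01-01 00:00:00 UTC.
APPLE_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def cocoa_to_utc(ts: float) -> datetime:
    return APPLE_EPOCH + timedelta(seconds=ts)

def rows_to_kml(rows) -> str:
    """Build a minimal KML document from (timestamp, lat, lon, accuracy) rows."""
    placemarks = "".join(
        f"<Placemark><name>{cocoa_to_utc(ts).isoformat()}</name>"
        f"<description>accuracy {acc} m</description>"
        # KML coordinate order is longitude,latitude.
        f"<Point><coordinates>{lon},{lat}</coordinates></Point></Placemark>"
        for ts, lat, lon, acc in rows
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            f"{placemarks}</Document></kml>")

# Assumed schema; confirm against your own Cache.sqlite before relying on it.
QUERY = """SELECT ZTIMESTAMP, ZLATITUDE, ZLONGITUDE, ZHORIZONTALACCURACY
           FROM ZRTCLLOCATIONMO
           WHERE ZTIMESTAMP BETWEEN ? AND ?
           ORDER BY ZTIMESTAMP"""

def export_window(db_path: str, start_cocoa: float, end_cocoa: float) -> str:
    """Export one time window of location points as a KML string."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(QUERY, (start_cocoa, end_cocoa)).fetchall()
    finally:
        con.close()
    return rows_to_kml(rows)
```

The time-window parameters in the query are the equivalent of the date/time filter in the tool's interface: they keep the placemark count small enough for Google Earth to handle.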
Speaker 2 (01:00:06):
So I used Josh
Hickman's iOS 17 image.
Thank you, josh Love, that thatwas available to use and I just
narrowed it to a date and timethat I knew he had location data
for.
So July 24th, from 11 am to 12.
Then you just click generateoutputs.
Once you click generate outputs, as soon as it is done, a box
(01:00:29):
pops up and says you can't seeit.
But a box pops up and says CSV,kmz and log generated
successfully.
Do you want to open thedirectory?
So I'm going to open thedirectory and let me share that
screen.
So what you get, you see here in my directory: the log file
(01:00:50):
related to the process; the CSV, which contains the timestamp, the latitude, the longitude, the accuracy; and then the KMZ file, which is ready to go into Google Earth.
And I actually preloaded it, so I won't take too much time to share the screen here.
Let's see.
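A quick aside on the format itself: a KMZ is just a zip archive whose main entry is a doc.kml file, which is why Google Earth ingests it directly. A minimal sketch of producing one from scratch (the single placemark here is made-up sample data, not output from Aaron's tool):

```python
import zipfile

# One placemark with a <TimeStamp>, so Google Earth's time slider can use it.
kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>Record 1</name>
      <TimeStamp><when>2023-07-24T11:30:00Z</when></TimeStamp>
      <Point><coordinates>-81.000000,35.000000,0</coordinates></Point>
    </Placemark>
  </Document>
</kml>
"""

# A KMZ is nothing more than the KML zipped up, conventionally as doc.kml.
with zipfile.ZipFile("points.kmz", "w", zipfile.ZIP_DEFLATED) as kmz:
    kmz.writestr("doc.kml", kml)
```

Note that KML coordinates go longitude first, then latitude, which trips a lot of people up when moving data out of a CSV.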
Speaker 1 (01:01:11):
We're going to
overtime, but it's totally worth
it, so stay with us.
Speaker 2 (01:01:16):
So this is what it looks like, mapped out.
He has it set up so that each data point is identified by the record ID, sorry, I couldn't think of it, the record ID in the database.
And here you see where, I'm assuming, Josh traveled during
(01:01:36):
that one hour on July 24th.
Let me just zoom in here.
Ah, it worked.
Good.
Oh, it didn't work.
All right, well, I'm just going to tell you.
Over on the left-hand side you'll see the actual record ID, and then above each record ID there's another box that can be checked, called
(01:01:58):
accuracy for record, with whatever number it is Aaron has in the KMZ file.
If you check those accuracy checkboxes, it'll actually show you the circle of accuracy around each point.
So I actually want to show that: I'm going to uncheck, then check all, so that we have those accuracies, and then we'll just
(01:02:19):
pick a random one and we'll zoom in on it.
I'm actually going to stop this share and share it with the other option so you can see the writing as well.
Speaker 1 (01:02:30):
Yeah, please, yeah.
Speaker 2 (01:02:33):
I just chose window instead of entire screen.
There we go.
So on this particular point, if we continue to scroll in, you can see that GPS data point, and in the box there's information about that record ID.
So it's got the information that I put in about my case, the
(01:02:53):
timestamp, which is in UTC, by the way, so that's something to take note of, and then latitude, longitude and accuracy.
For this particular point the accuracy is four meters, so you can see that accuracy circle around the point.
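Since KML has no native circle element, an accuracy ring like that is usually approximated as a many-sided polygon around the point. A rough sketch of one way to compute the ring coordinates (my own flat-earth approximation, fine at a four-meter radius; I'm not claiming this is exactly how iCatch draws its circles):

```python
import math

def circle_ring(lat, lon, radius_m, points=36):
    """Approximate an accuracy circle as a KML coordinate ring.
    Returns space-separated 'lon,lat,0' vertices, closed on the first one."""
    # Metres per degree; the longitude spacing shrinks with latitude.
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(lat))
    coords = []
    for i in range(points + 1):            # the +1 closes the ring
        theta = 2 * math.pi * i / points
        dlat = (radius_m * math.cos(theta)) / m_per_deg_lat
        dlon = (radius_m * math.sin(theta)) / m_per_deg_lon
        coords.append(f"{lon + dlon:.6f},{lat + dlat:.6f},0")
    return " ".join(coords)

# A 4 m accuracy circle around a made-up point; paste the result into a
# KML <Polygon><outerBoundaryIs><LinearRing><coordinates> element.
ring = circle_ring(42.65, -73.75, 4.0)
```

Checking those accuracy boxes in Google Earth just toggles the visibility of polygons like this one.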
I'm just going to zoom in a little more here on the road.
So I think this tool is pretty awesome.
Speaker 1 (01:03:18):
Yeah, and for folks that are not familiar with this type of analysis: that accuracy circle tells you that that device, or whatever it is, was somewhere inside that area.
Right, the larger the circle, the less accuracy you have, because now that device could have been anywhere in a big circle.
But the smaller the circle, the better it is for you to say,
(01:03:40):
look, we're pretty confident it was here and not there.
Speaker 2 (01:03:45):
Yes, yes.
One last thing I want to show.
I'm going to zoom back out, solet's get zoomed way out here,
also built in, and I don't knowif it's going to go backwards or
forwards for the first time,but we'll try it here.
Speaker 1 (01:04:01):
Let's do it.
Speaker 2 (01:04:02):
Is it doing it right now?
There we go.
All right, if I hit play, and I think I have this set to be on a loop, it's going backwards, or following the path that Josh took with his test phone on that day.
Speaker 1 (01:04:18):
Yeah, and for folks that are just listening, what Heather did: she took Google Earth and used its functionality to play those points, and now she's showing, all right, the phone moved.
And you can set the speed, how fast you want it or not, but it's putting the dots on the map.
This is good, because now you've got directionality, right?
Otherwise you have a blob of dots everywhere, all right, so what went first, what went second,
(01:04:39):
right?
Do I want to go and look at the timestamps one by one?
No, let's just hit play, and the dots appear in the order they were recorded on the device, as the device was moving along the surface of the earth.
So that's awesome.
And I think Aaron is watching.
Speaker 2 (01:04:56):
Yeah, I see him.
Speaker 1 (01:04:57):
Yeah, and he said that, you know, he really hopes that it helps some people out in their exams, and I believe it will.
There are already folks in the chat saying that they're going to use it at their work, they're going to use it for their master's thesis, so that's folks in the chat right now finding value.
So, Aaron, we appreciate you.
And for folks that are listening, don't be afraid of sharing what you know and what
(01:05:20):
you have, right?
Do we know about the Cache.sqlite database?
Sure.
Have we mapped it before?
Sure.
But this particular implementation, how he did it, and the accessibility: we still need it, right?
And just because somebody knew about it before me, that doesn't mean, oh well, I'm not going to do anything about it.
There is value in your perspective, even if it's a
(01:05:40):
topic that's known.
Does that make sense, Heather?
It does.
So please, folks, if you're listening and thinking, well, I had this idea, but I've seen it done before in a different way: it don't matter, put it out there.
Speaker 2 (01:05:52):
It's going to help, definitely.
I think too, with this tool, and I chatted with Aaron a little bit about it, but future considerations: I hope that he'll implement support for other databases, right?
So this is the Cache.sqlite.
This is great for iOS, it's like the main database, but I think Life360 could benefit from this, with the
(01:06:13):
Life360 locations in Android or iOS, and I mean, there's a ton of other databases that record location data that this tool really could work well for.
Speaker 1 (01:06:23):
Oh, absolutely.
And how he leverages how Google parses KMZ, right?
Yeah, the KMZ is fantastic.
Look at that: the accuracy, and being able to navigate to the points.
A really, really good job, so well done.
Speaker 2 (01:06:38):
Yeah, it's awesome, and I'm already using it at work.
I love it.
I was looking for this.
Speaker 1 (01:06:45):
There were a lot of great comments in the chat, and I really apologize to the folks whose comments we cannot all get to right now, because we ran out of time.
But if you're listening afterwards or watching afterwards, there's great value in being here live, because you interact with really smart people here in the chat and you're going to learn a lot, even more than what
(01:07:05):
we could try to impart or share with you.
The folks in the chat are great.
So again, thanks to Kevin and Jessica and Brett and all the other folks in the chat that are asking questions, and we apologize if we can't get to them because time kind of ran out.
I know, it's my fault.
Speaker 2 (01:07:23):
I talk a lot.
Several of our topics are now going to be pushed to the next podcast, because it has been an hour and seven minutes already.
Yeah, but it was a great hour and seven minutes.
Speaker 1 (01:07:35):
I believe so, I believe so.
Yeah, me too, me too.
Speaker 2 (01:07:38):
I am going to do the meme of the week even though we've gone over, and I actually have two.
I have two because it is Halloween time, and so we have to do both of the Halloween ones, right?
The next podcast is supposed to fall on Halloween, but Alex has kids, so we can't be doing that.
You have to go trick-or-treating.
Speaker 1 (01:07:56):
Yeah, so we'll do the
Speaker 2 (01:07:57):
Halloween memes now.
Speaker 1 (01:07:58):
Yeah, either my kids will kill me or my wife will divorce me, so I need to do trick-or-treating with the family.
Speaker 2 (01:08:04):
I think both would probably happen, yeah.
Speaker 1 (01:08:09):
I want a little bit of that candy.
You know, the parent tax.
There's a tax in my house from the parents: a percentage has to come to me as the parent.
Speaker 2 (01:08:18):
So go ahead and explain our memes.
Speaker 1 (01:08:21):
So you've got two folks dressed, I say folks, but two kids dressed as ghosts.
One is a better, well-dressed ghost than the other one, but you have to watch it.
The point is that one gets candy in the bag and the other one gets a rock, right, and I think a lot of us can relate to that rock.
The text says: my friend from another agency got a Talino box,
(01:08:44):
a Mac, a lab bag, a book, a laptop, tons of removable media (I can't spell, but tons of removable media), and me, I got a rock.
I got nothing.
And at some point, you know, we're always kind of hunting for parts to make things work when they break.
But you know, that's part of it.
The main thing is the mission, and we try to do, you know, the right thing with the tools that we have, and we will make it
(01:09:08):
happen.
Right.
And actually I have to go back for a second, there's a comment I have to share from Brett.
This is one last little thing, because I liked it a lot.
Let's see if I can find it.
He was saying that injustice overrides any person's reputation.
Right, and at the end of the day, reputation
(01:09:30):
is important.
But if your work and your carelessness result in an injustice committed on somebody somehow, then that's way worse than whatever I might think about you.
Our work is a work of truth, right, and the truth goes before any of our reputations.
(01:09:51):
So that's a great point.
So, yeah, that's the first meme.
Going back: yeah, sometimes we don't get what we need, but it's okay, we'll make it happen.
No matter what, the mission will get accomplished, whatever it takes.
Speaker 2 (01:10:03):
And then we couldn't have a Halloween go by without sharing the law enforcement digital forensics examiner Halloween costume.
This is a classic.
Speaker 1 (01:10:16):
We will be showing this meme for the next 90 years, or until we turn 90, because we won't be here by the time we turn 90.
So we have a person here, a guy, right, dressed with boots, 5.11 khaki pants, you know, tactical pants, the tactical belt, the tactical polo or the tactical
(01:10:37):
shirt.
If it's a shirt, the sleeves have to be rolled up, of course, with a tactical Suunto or, you know, G-Shock type of watch, and the ball cap, all right.
And that's the classic digital forensics examiner uniform on the whole planet.
Okay, right, so your costume comes with that outfit, right, your 5.11 pants for both office and lab work.
(01:11:00):
Right, because even when we have a meeting with management, everybody will be in a suit except us.
We'll still be in our tactical pants and our polo shirt.
That's just how the world works.
Okay, we're going to have pens that might have no ink.
We will have a write blocker kit that we carry around that we haven't updated the firmware on since 2015, because nobody updates the firmware, which, honestly,
(01:11:21):
we need to update our firmware, okay.
Yeah, and you know, this is really relatable, I think, because I can go into a place and I have a good feeling of who the examiners are based on how they're dressed, or how they behave, or the different equipment they have near them.
Right, definitely.
Even if you go to the private sector, you're like, oh, I don't do that anymore, but you did, I know you had it,
(01:11:42):
I know you dressed like this.
Don't deny it.
Don't deny it.
Speaker 2 (01:11:47):
I think it probably takes a little while to break that habit once you move from public to private sector as well.
Speaker 1 (01:11:51):
I don't know yet.
I wouldn't know.
Yeah, I wouldn't know yet.
I do wear a lot of polo shirts, but they're all from, like... oh yeah, I mean, I'm wearing polo shirts from a bunch of conferences, because you know they're free and they're great, right, so I wear those.
Someone asked in the chat about the link for iCatch.
Speaker 2 (01:12:15):
I'm going to throw it up here real quick, so it's there while we say our goodbyes, but it'll also be in the show notes, and I shared it on LinkedIn.
I think, Alex, you did too.
I think, alex, you did too.
Speaker 1 (01:12:22):
Yes, I shared it in the chat.
The chat doesn't go out to LinkedIn, so if you're on LinkedIn, you might not see it there.
Look at it on the screen, or you can go look at the show notes afterwards on YouTube or in whatever podcast directory of your preference, and you can get all the links for the show in one of those.
Speaker 2 (01:12:42):
Yes, okay.
Speaker 1 (01:12:43):
Yeah, I mean, Jeremy's saying that we can make the show longer, but my kids are calling me.
That's why we cannot make it that long.
Speaker 2 (01:12:49):
Oh yeah, no, there go the kids, mad at you, and divorced again.
Speaker 1 (01:13:03):
No, I mean, look, we do the show first of all because we like it, but also because we appreciate the community that's been built around it.
All you guys' and gals' comments and insights, we really appreciate them.
I do believe the guy in the 5.11 pants gets things done, so yes, Matthew, that's actually correct, and again, we appreciate you.
So again, we're not going to have a show for Halloween, because we're going to be doing trick-or-treating and other things, but then after that I'm going to come back and see what
(01:13:26):
happened in the last two or three weeks.
After that I'll have some zoo pictures, some wildlife park pictures.
Oh, I'm looking forward to the swimming penguins, all the good stuff.
Yeah, me too.
All right, that's all I got.
Anything else you got for the good order, Heather?
I have nothing else.
Speaker 2 (01:13:42):
Thank you so much.
Well, thank you everybody.
Speaker 1 (01:13:44):
Stay safe.
We'll see you soon, and have a good afternoon, good night, or good morning if you're in Australia.
Bye, bye, bye.
Thank you.