March 6, 2025 66 mins

The boundary between tool-dependent analysis and true forensic expertise grows increasingly blurred as AI enters the digital forensics landscape. Alexis Brignoni and Heather Charpentier reunite after a month-long hiatus to sound the alarm on a concerning trend: the integration of generative AI into forensic tools without adequate safeguards for verification and validation.

Drawing from Stacey Eldridge's firsthand experience, they reveal how AI outputs can be dangerously inconsistent, potentially creating false positives (or missing critical evidence) while providing no reduction in examination time if proper verification procedures are followed. This presents investigators with a troubling choice: trust AI results and save time but risk severe legal and professional consequences, or verify everything and negate the promised efficiency benefits. The hosts warn that as AI becomes ubiquitous in forensic tools, it dramatically expands the attack surface for challenging evidence in court—especially when there's no traceability of AI prompts, responses, or error rates.

Beyond the AI discussion, the episode delivers practical insights for investigators, including an in-depth look at the Android gallery trash functionality. When users delete photos, these files remain in a dedicated trash directory for 30 days with their original paths and deletion timestamps fully preserved in the local DB database—a forensic goldmine for cases where suspects attempt to eliminate evidence shortly before investigators arrive. Other highlights include recent updates to the Unfurl tool for URL analysis, Parse SMS for recovering edited and unsent iOS messages, and Josh Hickman's research on Apple CarPlay forensics.

Whether you're investigating distracted driving cases, analyzing group calls on iOS, or simply trying to navigate the increasingly complex digital evidence landscape, this episode offers both cautionary wisdom and practical techniques to enhance your forensic capabilities. Join the conversation as we explore what it truly means to be a digital forensic expert in an age of increasing automation.

Ready to strengthen your digital investigation skills? Subscribe now for more insights from the front lines of digital forensics.


Notes:

Magnet Virtual Summit Presentations
https://www.magnetforensics.com/magnet-virtual-summit-2025-replays/
https://www.stark4n6.com/2025/03/magnet-virtual-summit-2025-ctf-android.html

parse_sms.db
https://www.linkedin.com/posts/alberthui_ios-16-allows-for-imessagesmsmmsrcs-message-activity-7279586088988413952-xHWl
https://github.com/h4x0r/parse_sms.db/tree/main

Are you a DF/IR Expert Witness or Just a Useful Pawn?
https://www.linkedin.com/posts/dfir-training_a-pawn-moves-where-its-told-a-dfir-expert-activity-7292981112463572992-c3wd/

Unfurl
https://dfir.blog/unfurl-parses-obfuscated-ip-addresses/
https://github.com/obsidianforensics/unfurl

AI to Summarize Chat Logs and Audio from Seized Mobile Phones
https://www.404media.co/cellebrite-is-using-ai-to-summarize-chat-logs-and-audio-from-seized-mobile-phones/

Ridin' With Apple CarPlay 2
https://thebinaryhick.blog/2025/02/19/ridin-with-apple-carplay-2/

Hello Who is on the Line?
https://metadataperspective.com/2025/02/05/hello-who-is-on-the-line/


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:17):
Welcome to the Digital Forensics Now podcast.
Today is Thursday, March the 6th, 2025.
My name is Alexis Brignoni, aka Briggs, and I'm accompanied by my co-host, my dear friend and my empathic dear Heather Charpentier.
The music is higher up and I forgot the author of the song,

(00:40):
Shane Ivers.
Shane Ivers, thank you, and can be found at silvermansound.com.
There we go, I got it.
So, when I had the script right, and the screen that I have, my second screen.
The resolution is not the best,it's an old, cheapo screen, so

(01:01):
I kind of tried to send it downand not online and I deleted the
line.
So oh geez anyways, we made it.

Speaker 2 (01:09):
We made it here, it's because we, it's because we
took a month off.
You forgot who our music was bythat?

Speaker 1 (01:14):
that is true.
That is true, I'm I'm old andsenile well, yeah you don't have
to, you don't have to commenton that.

Speaker 2 (01:22):
You can just let it go. I'm just gonna go, yeah, let it go, let it go. Anyways, hi everybody.

Speaker 1 (01:27):
We're so happy that you're here, the folks that are
coming in live: Kevin, the man with the master plan; the Forensic Wizard; Johan is here again, our right-hand person, hi, Johan.
Laurie is also here, and Christian, doing all the good stuff that he does with UFADE and his participation in the

(01:48):
community and social media.
So hi to everybody.
So, heather, so what's going on?
What's going on?
We've been in a little bit ofhiatus, so what happened with
the hiatus.

Speaker 2 (01:57):
There's been so much stuff going on that I've felt so
busy and I know that you havetoo so we kind of just haven't
been able to have the podcastthe last few weeks, so we took
the month of February off.
Um.

Speaker 1 (02:10):
I mean kind of, I mean off of the podcast, but not
all the other stuff.
We have to do.

Speaker 2 (02:15):
It's been crazy, not off from everything else, but um
.
I mean, I, I've been just likereally busy at work.
We We've been doing some jobfairs at the state police, and
so I have to share a picture from the job fair yesterday at UAlbany in Albany.
So this is a couple of ourdigital forensic or computer

(02:36):
forensic analysts at the lab andthey came over and helped
myself and my friend Kevin withthe job fair.
I made them pose for thepicture for all the social media
.
But tons of great students overthere asking all kinds of great
questions, so I'll take some ofthem and then throw some your
way for the FBI as well.

Speaker 1 (02:59):
I don't think I can be picking anybody at this point
.

Speaker 2 (03:01):
Yeah, all right, maybe I'll just keep them all, because there were some really good people stopping by the booth.

Speaker 1 (03:08):
Yeah, that's, that's the for for now.
Yes, yeah, yeah, no.
Pretty cool picture.
You see there a little bit of a Faraday box there.
Yeah, phone in and deal with it without signals coming in, with a UFED Touch and whatnot.

Speaker 2 (03:22):
So pretty cool yeah we had the New York State Police
swag too.
We were handing out to all thestudents.
So Kevin Pagano says whichKevin was it.
I'm sorry.
So I've got to tell the storybecause it's funny.
I don't even think.
I've told you so the other day.
Kevin sends me a screenshot ofhis LinkedIn and my family is

(03:46):
stalking him. Literally, the LinkedIn says my mother looked at his LinkedIn, my sister looked at his LinkedIn, and he's like, the Charpentiers are stalking me, and
so I write to them.
I'm like what are you doing?
Why are you guys viewingKevin's page?
And they're like well, we knowyou work with a couple of Kevins
and we didn't know if that wasone of the kevins.
So we clicked on it and so Ihad to explain to my family.
No, that's alex's kevin, andthen I have my two kevins and

(04:11):
I'd like to acquire a few morekevins.

Speaker 1 (04:14):
So yeah, I think, I think that I think actually is,
I'm kevin's alex, that's how itactually works.

Speaker 2 (04:21):
Okay, uh, but there's , there's my sister in the chat.
She says so many Kevins.

Speaker 1 (04:28):
It's Kevins all the way down.
That's how it works.

Speaker 2 (04:29):
It is.

Speaker 1 (04:30):
That's how the world runs.

Speaker 2 (04:32):
But I had to accuse my family of stalking over the
weekend.

Speaker 1 (04:37):
Well, you know that investigative streak goes, you
know it's a family thing, youknow.

Speaker 2 (04:40):
Yeah, definitely, definitely.
Lori wants a kevin too.

Speaker 1 (04:44):
I'll share my kevins with anybody who wants well, um,
I want to say real quick like d, for dan is here, jesse's also
here, jeremy's hanging out inthe chat, so it's always so good
to see you all here yeah um, soyeah, so uh so what else I?
Know.
No, no, before something elsehappened that people need to
know what happened, tell us whathappened.

Speaker 2 (05:06):
So I'm leaving work to go to a meeting and I'm
pulling around the campus whereI work and I'm on a one-way
street and there is a wrong-waydriver coming down the one-way
street.
I'm on and she just smacks me,so I'm dealing with car damage
this week as well, so you weregoing the wrong way.

Speaker 1 (05:26):
Is that what you're saying?

Speaker 2 (05:28):
No, but she did kindly tell the insurance
company that it was a two-waystreet and that I sideswiped her
.

Speaker 1 (05:37):
You know it's sad, but not unexpected.
That's how people operatesometimes in this world.
A lot of people operate thatway.

Speaker 2 (05:47):
Yeah, so one of the podcasts was postponed due to me
having to go take care of the damage to my vehicle.

Speaker 1 (05:52):
Yeah, I know, I know.
But you know, I mean, the goodthing is that right after you
got hit, right, the lady goes,you know, running away, oh yeah,
I forgot about that part.

Speaker 2 (06:03):
She left the scene of the accident.
Yeah, she didn't see the crimeshe left the scene and luckily
there was a local PD officer upahead that saw her going the
wrong way on this road, pulledher over and questioned her
about the fresh damage on hercar and her response was yeah, I
just hit something.

Speaker 1 (06:22):
Something.
I mean, I had no idea what itwas.
That was me, the humongous carthat I crashed, and now I'm
fleeing the scene of the crime.

Speaker 2 (06:30):
I don't know how that happened.
Can you come back?
I need your insuranceinformation, please, so yeah,
Lori said thanks.
Thank goodness you weren'tinjured.

Speaker 1 (06:40):
I wasn't injured, and neither was she, so that's
really all that matters yeah,although the crash looks worse,
I mean looks like kind of bad,you know yeah, I should have put
the picture up.
I don't have it right on me,but it's okay you know, nobody
wants to see your ferrari allcrashed up yes, my ferrari, my
hunt, my hyundai tucson I'm notsuper fancy well, I mean, and

(07:01):
you know that's, that's, it isfancy, don't be plain dumb.
Also, I want to echo Lori we'rehappy that you're okay.
That's the first thing I askyou.
Are you okay?
Yeah?
I know you're hardheaded, butnot that hardheaded, I'm making
sure you're okay.

Speaker 2 (07:17):
My sister is chiming in.
Come on, digital person, get acar cam.

Speaker 1 (07:22):
Well, she's calling you out now.

Speaker 2 (07:26):
Tell, I'm not an uber driver, sorry I'm not an uber
driver and I actually do have acar cam in the box in my back
room that I'm waiting for her toinstall for me yeah right, look
, shannon is saying that nowit's time to buy the ferrari.
See, yeah right silver liningsI'll trade her in Well on my end
.
Enough about me, what have youbeen up?

Speaker 1 (07:46):
to.
Yeah, on my end it's.
You know I don't got too muchto share it's.
As people might know, thefederal government is going
through a lot of changes, a lotof fast, really fast, and we're
all having to adjust.
You know it's, I don't know, Ijust just okay, look folks, just

(08:07):
just read the news and you'llknow what's happening and we're
and we're dealing with it.
So, uh, hopefully, you know,things will pan out uh positive,
positively for everybody, forfolks in federal government and
for the nation, and we hope,always hope, for the best.
That's all I have to say onthat.

Speaker 2 (08:24):
Agree, agree completely.

Speaker 1 (08:28):
All right, so what do we have?
So let's get into the meat ofthe situation here.
What's happening?

Speaker 2 (08:32):
Yeah, I mean we have all kinds of topics because
we've been tabling them forweeks.
So, yeah, they have accumulated.
I didn't know what to pick outof our group here, but so one of
the things that I wanted tomention is the Magnet Virtual
Summit that took place a coupleof weeks ago last week, week
before and they now have theirpresentations from the virtual

(08:57):
summit up on their website towatch the recordings.
So if you weren't able toattend while that was going on,
they're now up and available.
I just kind of wanted tohighlight a couple of the
segments that were on the Magnet Virtual Summit
presentations.
So my boss, my lieutenant,actually did one of the

(09:19):
presentations, and it was titled To Cloud or Not to Cloud, That is the Question, and he highlighted how digital forensic labs
struggle with the decision ofwhether or not to take their
operations to the cloud orremain on premises.
So check out his presentation.
He did a great job.

Speaker 1 (09:43):
Who was that your boss?

Speaker 2 (09:44):
My boss, my lieutenant Brian Salmon.

Speaker 1 (09:46):
Fantastic job, the best presentation at that event,
right.
So we highly endorse that tomake sure that the boss
understands that everybody needsto go see that.
So thank you.

Speaker 2 (09:57):
You did your own presentation, but my boss's was
the best presentation.
Oh, absolutely.

Speaker 1 (10:03):
I even agree.
I agree with that, so let'smake sure that he knows that
that's the case.

Speaker 2 (10:08):
There we go.
Another one that I have notwatched yet and I cannot wait to
watch it, I just need to findthe time is Kim Bradley, from
Hexordia.
She did a segment on the sysdiagnose logs.
I really want to learn a lot more about the sysdiagnose logs and the different artifacts that you can obtain out of those logs, and I can't wait to watch her segment.
Um, and then, Alex, you did a segment in Spanish about the

(10:33):
role of the analyst beyond theuse of digital forensic tools.

Speaker 1 (10:37):
So, um, yeah, so I've been doing the Magnet Virtual Summit, but the Spanish version, I want to say for the last three or four years, maybe four, I think four, but don't quote me on it.
Yeah, it's always a good time.
The digital forensics community in Spanish is also large, but there's not enough content.
I wish I had more time to create more content in Spanish.

(10:58):
It's just, time is at a premium, obviously, but it was really good.
So pretty much it was, uh, um, the topic was, uh, button pusher experts.
You know, kind of in quotations.
Right, we don't want to be button pusher experts, we want to be just plain experts, and in Spanish, the, the qualifier goes at the end, right, so like experts, button pushers, right,

(11:21):
um, so we don't want that last name of being the button pusher.
And what does that take, um, to not be a button pusher, even though we do have to press buttons?
Don't get me wrong.
Right, there's buttons that have to be pressed, and it will be the same buttons the button pusher presses.
So there's no debate on that.
But there's more behind anexpert, when you do expert work,
and I think that's somethingthat is going to be even brought

(11:43):
more to the forefront with thewhole putting AI in all the data
forensics tools, and that'ssomething that we're going to
discuss a little bit later downthe road, I mean down the road,
down the episode road.

Speaker 2 (11:54):
Yeah, we'll do it today.

Speaker 1 (11:56):
Exactly this episode that we're on right now, correct
?

Speaker 2 (11:59):
Right.
So yeah, go check out theMagnet Virtual Summit
presentation.
And then Christian was just inthe comments too.
I mentioned the SysDiagnosepresentation.
They used his tool, Ufade,which we featured on the podcast
quite a few times, and it doesan amazing job with the
SysDiagnose logs.

Speaker 1 (12:17):
Yeah, and with the logical extraction.
So that's pretty good, andpeople should check Christian's
Ufade tool, so check it out.

Speaker 2 (12:26):
Definitely.
Another tool that I saw on LinkedIn more recently is called parse_sms.db.
So this tool: starting in iOS 16, iMessages, you could edit them, you could unsend them, and, as of today,

(13:11):
the author of this tool is mentioning that a lot of forensic platforms don't really display that data intuitively, or show what you are missing in the SMS database.
It will show you unsent messages.
If there are any unsentmessages in that database, it
will show you edited messagesand it shows you the original
message and then a date that themessage was edited and it will
then show you what the messagewas edited to.
Um, I have a couple ofscreenshots.

Speaker 1 (13:30):
I'm gonna just as you bring, as you bring them up,
it's folks that are more likebrand born, brand new into the
field.
Um, some of these messages,depending on the database and
the service, will get populatedin sequential order.
So you know message, knowmessage one ID one ID two ID
three sequential order.
So if you delete some let's sayyou deleted messages three to
five there'll be that gap therethat the tool will be able to

(13:51):
highlight for you, and so that'sthe reason for those gap, why
the gap identification isimportant.

Speaker 2 (13:59):
You can see here, in the output of that tool, exactly
what Alex is talking about.
We have an area here wherethere's a row gap, there's one
row missing, and this tool willpoint out the rows that are
missing.
Uh, if you have unsent messages, there'll be a little flag next
to the message or next to theum date and time that says the
message was unsent.

(14:19):
The message is no longer therewhen a message is unsent in the
SMS database, but but there is alittle flag.
And then let me just pull upthe other screenshot I have.
I did another screenshot here.
That first screenshot was fromthe author of the tool, but this
screenshot will highlight anedited message.
So you'll see, row 177 is aniMessage sent on December 18,

(14:45):
2022 at 1435, saying hey, hey,it's me.
And then at 1436, so like onesecond later it was edited to
hey, hey, it's me, regina.
Nice, very nice, so it has somenice output and it may give you
a better idea of what actuallyhappened with those messages.
I know when the edit messagevery first came out, some of the

(15:08):
forensic tools were showing theoriginal message as deleted in
the parse data and then the newmessage as intact, which I mean
I don't necessarily agree withthe whole deleted, it was just
edited, it was changed.
So going into that database andbeing able to find the date
edited on those messages andrealize, oh, the message wasn't

(15:30):
actually deleted, it was justedited to this.
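
For anyone who wants to poke at this outside of a forensic suite, here is a minimal sketch, in the spirit of what parse_sms.db automates, of pulling ROWID gaps and edited/unsent flags straight from an iOS sms.db. It assumes the iOS 16+ schema where the message table carries date_edited and date_retracted columns and dates are Apple-epoch nanoseconds; treat those column names as assumptions to verify against your own extraction, and note that the pre-edit text itself sits in the message_summary_info binary plist, which this sketch does not decode.

```python
# Minimal sketch (not parse_sms.db itself): list ROWID gaps and edited/unsent
# flags from an iOS 16+ sms.db. The date_edited / date_retracted columns and
# Apple-epoch nanosecond timestamps are assumptions to verify on your data.
import sqlite3
from datetime import datetime, timedelta, timezone

APPLE_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def apple_ns(value):
    """Convert Apple-epoch nanoseconds (0/NULL means 'not set') to UTC."""
    return APPLE_EPOCH + timedelta(seconds=value / 1e9) if value else None

conn = sqlite3.connect("sms.db")  # always work on a copy of the extraction
rows = conn.execute(
    "SELECT ROWID, text, date_edited, date_retracted "
    "FROM message ORDER BY ROWID"
).fetchall()

prev_id = None
for rowid, text, date_edited, date_retracted in rows:
    if prev_id is not None and rowid != prev_id + 1:
        print(f"-- gap: ROWIDs {prev_id + 1} to {rowid - 1} missing --")
    prev_id = rowid
    if date_retracted:
        print(f"{rowid}: unsent at {apple_ns(date_retracted)}")
    elif date_edited:
        # The pre-edit text sits in the message_summary_info binary plist,
        # which this sketch does not decode.
        print(f"{rowid}: edited at {apple_ns(date_edited)} -> {text!r}")

conn.close()
```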

Speaker 1 (15:33):
The most problematic word in digital forensics is the
word deleted.
Yes, it is the second mostproblematic word is unallocated,
because people will take thosetwo words to mean either the
same thing or totally differentthings, or things that are just
like bonkers, right Right.
So we're not going to get intothat.
But whenever you use the wordsdeleted and unallocated, you got

(15:58):
to make sure you define theterm to the person that's
hearing you, so make sure thatwe all understand what we're
talking about.

Speaker 2 (16:04):
Yeah, be careful.

Speaker 1 (16:05):
Extremely careful.

Speaker 2 (16:06):
Yes, so I'll put the links for that up in the show
notes.
Afterwards.
There's a LinkedIn post from Albert Hui, who is the person who has the parse_sms.db tool, and then I'll put the link to his GitHub so everybody can download that and try it out.

Speaker 1 (16:22):
Absolutely, yeah.
I love it when folks put those tools out.

Speaker 2 (16:28):
Yeah, oh, me too.
I love trying them out.
I have to go create test data,though.
I realize I don't have anydeleted messages on my phone
today, so I used some old datathat I had.
I knew I had deleted messages.

Speaker 1 (16:39):
Well, yeah, there's some test data you need to create, don't forget, you know.
Yeah, oh, I have the Android almost ready for you.
Yeah, it's for me.
Oh, actually, I didn't tell, Ididn't tell the folks for us.
Well, for me and for us.
So yeah, so, um, let me tellthe folks real quick.
I'll be going to Amsterdam at the end of the month to participate in the DX Excel conference, the largest

(17:00):
conference in the Netherlands, where I'll be giving the keynote and also teaching a couple of sessions on the LEAPs and how to use them and how they work and stuff like that.
So that's going to be our trialrun no, I mean not trial run, I
mean it's going to be a run forthere and then we're going to
take that content also and useit at Techno, where we've been
selected you know, heather andmyself to teach the leaps up at

(17:24):
Techno, which I'm really excited.
Never been to Techno, alwayswanted to go, so it's fun.

Speaker 2 (17:32):
Yeah, it'll be my first time going to Techno, so
really excited about it.
Yeah, anybody that's going toTechno.
So, with the, the class we'redoing on the leaps is a hands-on
lab, and any hands-on lab atTechno you have to sign up for
ahead of time.
So if you're going to technoand you want to join us, make
sure you go on to the site and,uh, register for that ahead of
time.
It will probably fill upbecause I think there's like 30,

(17:52):
30 seats, um, so come, come andjoin us, it'll be a good time
absolutely have have had to giveher her signature.

Speaker 1 (18:00):
You know famous.

Speaker 2 (18:02):
Yeah, right, um.
So an article on LinkedIn thatI saw within the last month is
another great article from BrettShavers.
Um, I love his articles.
They always hit home and it'salways something that's
personally happening, um, in mylife, at work, whether it be, uh
, in the forensic world, and I'msure it's happening in many

(18:25):
other people's lives.
But this article is entitled Areyou a DFIR Expert Witness or
Just a Useful Pawn?
So Brett explores the role ofdigital forensic and incident
response professionals servingas expert witnesses in legal
proceedings.
He emphasizes the importance ofintegrity and objectivity,

(18:47):
cautioning experts against beingmanipulated into serving as
mere tools for one side.
He advises DFIR professionalsto maintain independence,
thoroughly validate theirfindings and be prepared for
rigorous cross-examination touphold the credibility of their
testimony.
So a couple of my favoritequotes from this article.

(19:12):
I'll let everybody read it, butas soon as you start looking
for ways to confirm a preferrednarrative instead of uncovering
the truth, you're not justfailing, you're corrupting the
field.
And my other favorite quotefrom it is the only thing that
makes forensic work bulletproofis process.
It's not your reputation, youryears of experience or how well
you can testify under pressure.
If your methods aren't solid,you're a liability.

(19:33):
And if your findings don't holdup, you deserve what happens in
cross-examination.

Speaker 1 (19:39):
Oh, absolutely.

Speaker 2 (19:41):
I love, love this article.

Speaker 1 (19:43):
I find it interesting , because when let's let's be
honest with ourselves here whenwe read something like that,
what's the first thing thatcomes to mind?
What's the first thing thatcomes to your mind?

Speaker 2 (19:54):
The first thing that comes to my mind.

Speaker 1 (19:56):
People shouldn't be doing this.
What people are these?
Oh, be honest with yourselves. The first thing that comes to most people: if you're in law enforcement, you're thinking the defense. Oh, the defense, the defense, they're hired guns.

Speaker 2 (20:10):
Let's be I got you all right.

Speaker 1 (20:12):
Is it true or not?
Yeah I mean the first thing youthink is well, defense.
Of course they're hired guns.
You know, they're hired to say whatever, right, right, right, um.
That's true, that's true. We immediately, we gravitate to them.
Uh, no, actually, when you'repointing at them, there's all
these other fingers pointing atyou.

Speaker 2 (20:28):
Um yeah, I think I was going toward the pointing at
you, because I had just readthe article again and I'm like
oh, it's so easy to be not notto be talked into.
But for those attempts to bemade that you're being talked
into, I need the data, I needthe evidence to say this and I
need you to say this to provethat.

(20:50):
And we have to be very carefulwith that and make sure that
we're not just conceding to that.

Speaker 1 (20:57):
Well, I mean, and to your point right, when we think
this applies to others and wedon't put ourselves first,
that's a bias.
And I people say you shouldn'thave biases, and I mean it's a
personal thing.
I believe we'll always havebiases.
The question is, how do wemanage those?
Right?
Do we work based on beliefs oron principles, right?

(21:18):
And this applies, look, this needs to apply first to us on the law enforcement side: if the investigators, the prosecutor,
your supervisor, the sergeant,the captain all think and
believe this guy is guilty, well, I cannot operate on their
belief, even if I believe it too.
I cannot.
I get to operate on principleand the principle will guide me.

(21:38):
The bias will be there, but theprinciple will make sure to
check that and make it anon-issue there.
And it's us.
It's us, those quotes, it'sourselves Just hitting that
button, getting that output orcherry-picking.
Because, let's be honest, if youhave enough data, you can
cherry-pick the data and makethe data say whatever you want
to say.
And that's just a reality ofany field where you have enough

(22:03):
data.
And we don't want to be in thatsituation, either because we're
influenced by a personal biasof our belief and we shouldn't
have beliefs, we should haveprinciples or because an
external entity you feel thatthat's that a group pressure to
conform to the prevailing theoryof the case and that requires
that.
And again, for this year, Ihave three main things I'm

(22:25):
really focusing on is probity, which is the moral character of you as an individual; make sure you do your due diligence, you do all the work that you need to do and not cut those corners; and have proper attention to detail.
When you have those threethings, a lot of this, issues of
maybe becoming a pawn, eitherby others or willfully on your
own, it's not going to happen toyou.

(22:48):
Probity, attention to detail and due diligence, those three things.

Speaker 2 (22:54):
I think one of the biggest pushes that I get and I
know other examiners get in myoffice is when you have a
prosecutor and they want you tosay this is his phone, this is
his phone.
Well, we can't tell whose phoneit is.
We can tell you the useraccounts that are in it.
We can tell you what messageswere being sent, If messages

(23:15):
were being sent to a specificphone number.
Um, we may find a resume savedin the documents for the person,
we may find the driver'slicense photo in the images
section, but I can't tell youthat this is that defendant's
device.
I didn't see him with it in hishand.
And then I always use theexample.
I have test devices and my testdevices are Sheldon Cooper and

(23:38):
Amy Farrah Fowler and if youseize them from me, nobody can
go testify that those are AmyFarrah Fowler and Sheldon
Cooper's devices.
So definitely don't be a usefulpawn.

Speaker 1 (23:50):
Yeah, I hate the "can you say this" game, I don't like it.
I mean, yeah, you got theevidence, you got my report.
If you want to make someconclusion based on those,
that's fine.
My testimony will show.
It is what it is.
But this whole, look, if Icould say, was his phone, guess
what?
Guess what.
That would be already on thereport.

Speaker 2 (24:09):
It would be on the report If it's not on the report
it's because because I can'tsay that.

Speaker 1 (24:14):
I mean, come on, I mean you have the report, read
it, understand it, and andthat's where we go.
I mean I'm not going to go outof the report because that's
what it is.
You know, absolutely.
And Forensic Wizard says they never read the report.
Look, I have so many memesabout that.
It's like I give you the report.
Two weeks ahead we're gonnahave a meeting about the report
and I'm reading the report.
I mean I, I I'm gonna be like,I mean let me leave it like that

(24:38):
, right?
Um, people should read thereport.
Let's just leave it like I'mgonna say something I shouldn't
be be saying.
Let's carry on.

Speaker 2 (24:43):
Well, I am going to say you get to the table and
then you get asked the questionwhat do we got?

Speaker 1 (24:49):
Page three.
Let's read page three together.
Let's read it together.

Speaker 2 (24:53):
Definitely Check out Brett's article, though, and all
of his articles.
He posts them on his LinkedIn.
If you don't follow him onLinkedIn, you have to.
He's got some great stuff.

Speaker 1 (25:07):
Absolutely.

Speaker 2 (25:10):
Brett is the best.
So during our hiatus a newversion of Unfurl was released.
We've talked about Unfurl onthe podcast before.
I've shown a demo of how youcan take a TikTok URL and you
can put it into the unfurl tooland it'll show you timestamps

(25:30):
for that TikTok and kind of justbreak that URL down into a user
account, into the timestamp,into other information related
to that URL.
Well, the new version addsparsing of encoded and
obfuscated IP addresses.
It resolves blue ski handles totheir identifiers and looking

(25:51):
up their creation timestamps.
And it's blue sky.

Speaker 1 (25:54):
I know I'm like.
What service is that I neverheard?

Speaker 2 (25:58):
about it.

Speaker 1 (25:59):
Is it like a beer that comes from the sky?
I don't know what that is Likea blue ski?
I don't know, I don't know.

Speaker 2 (26:04):
Blue ski annoys Alex, so I have to say it every so
often, and I haven't been ableto say it for a month.

Speaker 1 (26:09):
Oh, no, I mean, look, it doesn't annoy me, you're the
one saying it, go ahead.

Speaker 2 (26:17):
Oh boy.
So let me I'm going to put wetried, we tried out the blue sky
in the new unfurl here.

Speaker 1 (26:24):
So as you're zooming in.
Laurie mentions correctly that Unfurl is a tool developed by Ryan Benson, great examiner.
He works for Google and he keeps it up.
I like him adding the, the, the Blue Sky, or like how she says, brewskis or whatever, as the parsing for those URLs, which is pretty neat.
So show us what.
What does it entail?

Speaker 2 (26:45):
So I use the Digital Forensics Now podcast Blue Sky
page and it breaks down theprofile, gives you the profile
name, gives you the handle, butthe thing that's new, that's
added here not new, but it looksup the creation timestamp so
it's showing here.
I created this Blue Sky accounton November 13th 2024.
And that can definitely be somebeneficial information for your

(27:09):
criminal investigations.
For sure, when something wascreated, we talked about those
TikToks before.
If there's an incriminatingTikTok that is put up onto the
TikTok platform, you can takethat URL, put it into this tool
and you can find out when thatincriminating TikTok was created
.

Speaker 1 (27:30):
So yeah, yeah, and this technique is really useful.
I have some carved out URLs insome cases and then you can
really look into the timestampsof whatever the search was
within that url and and get somecontext.
Because if you get the urloutside of the database that

(27:50):
keeps track of it in the browser, you might not have that timestamp, but if it's a search, a Google search URL, you might find the timestamps inside of it, encoded in a way that it's not obvious to you as a human reader.
So Unfurl does that for you,does Blue Sky, does a whole
bunch of stuff.
So keep that in your toolboxand you might find it to be

(28:11):
really useful when you leastexpect it to be.
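
As a concrete illustration of the kind of decoding Unfurl automates, TikTok video IDs are widely reported to embed a Unix timestamp in their upper 32 bits, so a rough creation time can be recovered from nothing but the URL. This is a hedged sketch for triage only; the URL below is made up for illustration, and any result should be confirmed against Unfurl's own output.

```python
# Hedged sketch of one decoding Unfurl automates: TikTok video IDs are widely
# reported to carry a Unix timestamp in their upper 32 bits. Triage only.
import re
from datetime import datetime, timezone

def tiktok_created(url: str):
    """Return the apparent creation time embedded in a TikTok video URL."""
    match = re.search(r"/video/(\d+)", url)
    if not match:
        return None
    video_id = int(match.group(1))
    return datetime.fromtimestamp(video_id >> 32, tz=timezone.utc)

# Hypothetical URL, for illustration only.
print(tiktok_created("https://www.tiktok.com/@someuser/video/7186087908908371237"))
```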

Speaker 2 (28:13):
Yeah, actually too.
So we recently got a newinvestigator in the lab and he,
prior to becoming investigator,didn't have a ton of digital
forensics experience, but hewatched the podcast and he
actually used the episode of the podcast with the TikTok creation date that we showed Unfurl in, in one of his

(28:34):
investigations, when he was outon the road and was able to make
the arrest, he said.

Speaker 1 (28:37):
Oh, wow, yeah, Awesome.

Speaker 2 (28:39):
I know he got a gold star.
He's brand new.
And then he used the podcast,yeah.

Speaker 1 (28:45):
No that that, that that makes doing the podcast,
the podcast all the moreworthwhile, you know exactly,
Exactly.

Speaker 2 (28:53):
I asked him if he told that story in his interview
, because I mean he, we couldput him at like number one,
right.

Speaker 1 (28:58):
I didn't stand his interview.
That's awesome, that's thatthat warms my heart.

Speaker 2 (29:08):
Thank you, yeah, for that anecdote.
Um, so uh, I'm gonna let youtake this one away, but ai, uh,
summarizing your chat logs andaudio from seized mobile phones,
what do you think?

Speaker 1 (29:19):
yeah, so there's this article, um, I don't know if
you have the URL that you can show people.

Speaker 2 (29:25):
I do.

Speaker 1 (29:25):
Yeah, so let's put it on.
So 404 Media actually they'redoing pretty good reporting on,
I say, digital but cyber-ythings, really good reporting.
So they were mentioning and Iwant to make clear about this
they mentioned how Cellebrite is adding AI to their tooling and
kind of going over theirmarketing materials and the

(29:49):
claims they do and some possibleissues with that.
Now I want to make clear foreverybody the article mentions
Cellebrite, but the fact of the matter is there's at least three or four tools in the marketplace for digital
forensics that have AI included,and when I mean AI, I mean
generative AI, llms, because wehad AI for other things in the

(30:10):
past for image identification ofguns of money.
It's questionable how good orbad they are also to identify,
possibly, csam images.
So the technology exists whichand from my perspective, no
matter if it's LLM or not, it'spattern recognition.
That's pretty much, I think,kind of like a baseline.

(30:31):
You know, the keving at thebottom is pattern recognition.
If that makes sense, llmsthemselves will sort of figure
out what the next thing to saywithin the pattern of learned
things, and of course they don'tdo that.
It's an approximate calculation.
You cannot calculate it fullybecause there's so many options.

(30:52):
It would be impossible tocalculate them all, so it does
an approximation.
That's why it's a model, right,it's not the thing, but
something close to the thing.
Like you have a car and youhave a model car, it's the same
car but smaller.
Does that make sense?
So the point I'm saying this isbecause these type of
technologies are approximationsto the ultimate calculations

(31:13):
that cannot be made becausethey're impossible to be made.
So, with that being said, I'm not picking on Cellebrite, I'm
not picking on any particularvendor.
I will make some points that Ifound interesting about the
article, and the main point isthis All the promotional
materials around marketing, andeven us as users, right, always
tells us that the tools, theseAI tools, will find things that

(31:35):
you can't find on your own andthat they will do it faster.
Right, they will find what youcan find, and faster.
That's the big promotional pushfrom everybody.
Actually, people themselvesbelieve this to be the case,
right, and I want to show acomment from Stacey Eldridge.
Can you put that link?

Speaker 2 (31:53):
up.

Speaker 1 (31:53):
I will Yep Hold on one second so she's an excellent
examiner and she used to workwithin the Bureau before she
moved to the private sector, andshe says that in her experience
, ai is inconsistent.
Right, you can do a task fivetimes and the sixth one is all
crazy and that's to be expectedof these LLM technologies.

(32:15):
That's nothing weird, that'sjust how it is right.
You can look at an AI summary.
She said summary, and you findnothing.
You still have to go back.
Right, but what if they misssomething?
So you end up asking about thechats but then having to read
them and you find nothing.
And I mean or and I'm gonnaparaphrase her, just to not read
it straight up but if you dofind something, you still need

(32:37):
to go back and and verify that'sthere, right, and read it,
analyze, like she's saying right, what if the uh, if the uh ai
has some bias, right, how willthat influence the perspective?
Right, and I agree with her.
And I guess the point I'mmaking with this, with the
article, is that, in regards tothe marketing, when we talk
about, this is a personalopinion doesn't reflect, and, as

(32:58):
always, none of the things wesay in the show reflect the
opinions of our employers oranybody else.
They, they just reflect ouropinions and they're obviously
subject to change as we learnnew things.
Right, we operate on principles, not beliefs.
Now, that being said, the onlyway I think or opine that you
will get a speed boost by usingAI is if you make the habit of

(33:20):
trusting the AI, of trusting theoutput.
And the problem with trustingthe output is that this output
is a model, it's model-based.
It has to add some randomconcepts to it to make it human
sounding, and that's where thecreativity of the thing that's
needed to make it sound humanwill trip you up.
It is okay to have AI gen AImodify your paragraphs on a

(33:43):
whatever, on a speech you'regoing to give or a presentation.
That's fine.
But when we're talking aboutevidence, you can have that
creativity there, right, andthat will get you in trouble.
So what happens?
If you want the speed boost,you have to start trusting a
system, not verify it, and theproblem is that you cannot trust
a system that you cannotvalidate, and it's a big
difference.
People talk about verification and validation as if they're the same thing, and it's a big difference.

(34:03):
You need enough verificationpoints to validate a system, but
you will never be able tovalidate a generative AI system.
You can verify all the outputs,but if you verify the outputs,
you're missing that speed gainright.
So which one is it?
Is it faster or not?

(34:25):
I would argue it's going to take you at least twice as long now, looking at all of the data and going to verify it after the AI found it.
Well, no, absolutely, I hear you, 100%.
And and I mean you have to bereally careful, because we don't
want to, we don't want tobecome, uh, a creator of
questions.
Does that make sense?
Like, well, my skill is to create, uh, questions and the AI tells me the answers.

(34:46):
Um, like, look, at some point Ineed to do the thinking, I need
to do the analysis.
I can just offload that? I'm just gonna make questions and the AI gives me answers?
Why? Again, like Stacey is saying, what if it doesn't get all you need?
What if it's not the actual thing?
Um, so, I guess you know, andsome of the points that the
article makes this is what Stacysaid, but the article makes

(35:08):
some good points.
For example, this is an exampleof imagine this investigator
and he's investigating peoplerobbing stuff from porches right
, like you know, like Amazon,packages that are being stolen
from porches, and they pull thisdata there and the AI finds a
pattern and then it becomes aninvestigation of a, you know,

(35:28):
transnational ring of criminalsstealing packages from porches.
Well, hey, I would have neverfound that out without the AI.
But the question is was yoursearch warrant allowing you to
do this Right?
Do this right?
Are you exceeding yourauthority in looking for
patterns that you were notallowed to look for, because you
came there for a specificpurpose, not to see if there's a

(35:52):
transnational ring of X or Y,right?
And those are questions thatare going to be played out down
the road.
I made a point in the thread in LinkedIn of saying something we need to think about is adding disclaimers.
The tooling needs to add a disclaimer when output from AI
is being pushed out to a report.

(36:13):
There has to be sometraceability of what is what the
person does and what does theAI do, and if it's verified or
yeah, verified or not.
And I think we spend a lot oftime thinking about what's good
about it and not spending enoughtime thinking about what are
the consequences of using thistechnology within the parameters

(36:36):
that we have today, parametersthat don't take into account AI,
because that was not a thingthree years ago and AI at this
level right.

Speaker 2 (36:43):
Right.

Speaker 1 (36:46):
There is a lot of danger that we just be running
through, and I'm talking aboutall the development in the field
and also the users.
We think people think AI ismagic.
Whatever you want AI to be,that's what it is.
It's analysis, it gets unparseddata, it finds all the
solutions and, actually, of thethings that I said, ai does none

(37:08):
of those.
It does none of those.
It will not find things thatare unparsed for you.
It will not give you all thesolutions to a question,
depending on how it's trained,what that model is.
Imagine the article says thatit's trained only on convictions
.
The LLM is only trained to getconvictions.
Well, there might be a datapoint where that person is
innocent, but the data set, themodel, is looking for

(37:31):
convictions.
And again, if you have enoughdata, you can find patterns in
anything and it's going to finda conviction when there
shouldn't be one.
And you might think, oh, thatwill, I'll verify it, will you?
Will you verify it?
Are we sure about that?
Are you sure that you're not?
It's been right the last 10times.
It's surely right the 11th time, and I'm in a hurry.
I got another 40 cases, 50cases in the hopper that I need

(37:54):
to work with right.
The tool makers push it on theexaminer and the reality is that
the examiners being overloaded,they're going to.
Many of them are going toassume it's right because the
tool has it, because if it was wrong, why would they put it on the tool?
And that goes to the last pointI wanted to make on this
article reputational costs.

(38:16):
Do you want to be the vendorthat is brought out to court
because the AI said some nonsense and you're going to stand?
And you're going to be on the stand and say, whoa, we told the examiners to verify it.
You can give that disclaimer in court, but the reputational damage is going to be done.
Your tooling will be known for spewing nonsense and your disclaimer is going to fall on deaf ears, right?

(38:38):
So I believe there should be amore robust way of delineating
what's AI, what's not AI of,either by policy or I don't know
some technical solution to makesure that the examiner attests
to the verification ofAI-produced material, right, and

(39:01):
again, first of all, because wewant cases to run properly,
like Brett's saying, the truthcome out, but also, as part of
the field and vendors, make surethat our tooling is providing
what it needs to provide and ourreputation will cost.
Again, that's not even secondary, it's tertiary.
It's like a whole level down there in regards to justice.
But look, if you are a publicly traded company, for example, in this type of space, you need to worry about that.

(39:22):
If you want the enterprise to continue to subsist, does that make sense, Heather?

Speaker 2 (39:30):
Yeah, according to this article too, like civil
liberties experts are alreadylatching on to the lack of
transparency and inaccuracies inthe AI generated results, so
it's going to be questioned.
It's definitely going to bequestioned.

Speaker 1 (39:45):
Well, I mean, and you put AI in the tool and now
becomes a really target richenvironment for folks on
whatever side of theinvestigative process is right,
because it could be a civildispute and one side is going to
latch onto that right.
The attack, the exposure right,the attack surface.
When you put AI on the tool, itgrows exponentially.

(40:08):
On what things?
What lawyers on whatever sidecould latch on to discredit the
work?
And so are we considering that?
Are we considering howexaminers will be exposed to
argumentation because the toolhas AI in it, right?
Do we have I said it beforesomething as simple as the

(40:28):
prompts?
Are the prompts being loggedwhen they happen?
What did they say?
What the amount of responsesthat it got out of the responses
, how many were correct, howmany were incorrect?
Do we have those statistics?
I believe we don't, and Iwouldn't be surprised that,
again, attack surface has beenbroadened.
Folks on the other side, withgood reason, will ask for that

(40:50):
right.
How can we validate thisprocess, right?
Well, you can't.
At a minimum, you have to haveat least a traceability at a
minimum, right, and I sound likea Luddite people that hate
technology, but we don't.
We love technology.

Speaker 2 (41:08):
Definitely.

Speaker 1 (41:10):
But there's some fields where we need to slow
down a little bit.
Right, this field is onemedical.
I don't want, I don't want AIgiving me medicines.

Speaker 2 (41:17):
I was just going to say that.
So you and I talk about AI sooften that my Facebook feed is
just filled with different AImodels for different things, and
most recently there's an AIspecifically for lawyers and an
AI specifically for the medicalfield.
And I do not want my doctorsdiagnosing me off of AI.

(41:38):
I see how he answers thequestions I ask.
If I, if I even ask chat GPT,uh, what does this artifact mean
?
They're always wrong, almostalways wrong.
I'll give it 90% wrong.
And a doctor diagnosingsomebody with that?
We're going to have malpracticelawsuits coming our way, not
our way, their way.

Speaker 1 (41:59):
The thing is, about that, I was asking.
I didn't remember where a registry key was in Windows that I needed for a case, and I'm like, let me Google it, right.
But now Google, on the top, they don't put the first search result, they put the AI on the top.
They do, so you're forced to read that, yes, and I'm like, well, let me go at it.
And that registry doesn't exist.
It gave me a registry key that does not exist, and I'm like

(42:21):
what the heck?

Speaker 2 (42:22):
Those answers at the top of the page are wrong so
often.
I like to think that some ofthe maybe some of the tools I'm
using, though, aren't as bad asChatGPT, so hopefully we'll have
some better results.

Speaker 1 (42:35):
but wow, and people don't understand how that works.
You're going to start asentence and the next part of
that sentence, those tokens.
Right, it could be a token, it could be a letter, but that's too hard to calculate, or words or phrases that, based on the model, should come next.
Right, and then a little bit ofrandom to make it sound human.
And it's not really thinkingwhat the right answer is, it's

(42:57):
just thinking based on my model,the probabilities of my model,
what would be the right thing tosay or the closest thing to say
, with a little bit ofrandomness to make it sound
human.
And that's scary for fieldswhere you can not have an
approximation to the truth.
You need the actual truth.
I need the actual medicine.
I was reading an article wherethey were doing transcriptions

(43:20):
from, you know, the doctor records, what, you know, today I met patient Alex Brignoni, I'm going to give him 200 milligrams of Toradol, whatever.
So that recording.
There's people that actually goand type it out to get these
prescriptions.
Well, they have an AI do it,and AI was making mistakes in
what the actual medication wasor what the dosage was.
People could die if they getthe wrong medication, the wrong

(43:42):
dosage, right, yeah, so I don'tyou know.
We have to be taking that intoaccount.
I think the digital forensics field, and not only the vendors but also us as users, need to temper our expectations.
Learn about AI to make sure weunderstand what the attack
surface is.
And since the tools are goingto have it, like it or not,
because vendors already put themin, then you have to think as

(44:02):
an examiner how will this newattack surface affect my
cross-examination?
When I'm being cross-examined,how will that affect how I write
my reports?
If I'm integrating AI knowledge, am I having the transparency
that's required through thediscovery by law, and can I
substantiate my verification inthe face of a tool that can

(44:23):
never be validated?
And those are things that we westart really think about
yesterday, not today, yesterday.

Speaker 2 (44:30):
Yeah, definitely, definitely AI.
I had to put up a couple ofcomments here.
Ronan says it's.
It is not me.
Ai wrote this message.
Hi, ronan, we know it's you.
Thanks for joining us.
And Forensic Wizard says itwould make people lazy.

Speaker 1 (44:50):
And definitely, it's just easy to push the button and
trust it I mean, I had a memethat says, uh, tooling enables
mediocrity, uh, convince meotherwise, or something like
that.
I mean, anyway, it's not, it's not tongue in cheek, um, but the, the, the automation, the idea
is to focus you on what youneed to look into, but the way

(45:13):
it's actually marketed is thisis the solution to your backlog
problem.
Yeah, and they and you're offto the races and we've got to be
careful.
I think we should go back to theunderstanding that any tooling,
ai or not, is to focus yourattention on things that you
need to take into considerationas you're doing your examination
, but they're not theexamination.
The tool output is not the exam.

(45:34):
The things that werehighlighted by the tool is not
the exam.
You and me know this to be true.
We look at a case and the mostimportant things of the case are
not parsed by the tool periodand me asking AI about it.
The AI will not know about iteither, because it's not parsed.
So I think we need to go backto those basics as examiners.
This will focus my attention,but it's not the thing.

(45:55):
It's not the thing.
The data is the data.
The reports are not the data.
The AI outputs are not the data.
The data is the data.
We need to go back to that.
Yes, all right, so we're goingto move on from the AI and folks
.
We said this a whole bunch oftimes.
But guess what, as long as AIis being pushed down our throats

(46:16):
, we're going to be sounding thealarm of we need to make sure
that we take this seriously anddo our responsibility, so you'll
hear this in many more podcasts.

Speaker 2 (46:24):
I'm sorry to tell you Yep.

Speaker 1 (46:26):
But until AI changes in some fundamental ways, we're
still going to be sounding thatwarning.
So it is what it is.

Speaker 2 (46:34):
Agreed New blog post from Josh Hickman.
The Binary Hick is his blog.
He recently wrote a blog postabout Apple CarPlay.
It is his second blog postabout this topic.
He revisited the forensicanalysis that he previously did
on Apple CarPlay.
The blog talks about how todistinguish between actions

(46:56):
performed via CarPlay versus directly on the iOS device.
One of the artifacts, and this is just one of the artifacts he talks about, are the unified logs and the wired and wireless
CarPlay connections that you canfind in the unified logs.
He also outlines numerous otherartifact sources that contain

(47:16):
data related to the AppleCarPlay.
This type of data, I would say,can definitely be used in your
distracted driving cases.
Was the driver on the phone?
Were they manipulating thephone or were they connected to
CarPlay and maybe going wireless?
Might need to know that in acase like that.
So his blog would definitely behelpful for anybody that is

(47:38):
investigating a case that hasthose charges.

Speaker 1 (47:43):
Oh yeah, those are pretty common cases that require
that, so please check that out.

Speaker 2 (47:50):
Ah, let's see here.
I'm reading comments and switching in here. So, Hello, Who is on the Line? is another blog post that recently came out by Metadata Perspective.
It looks into forensic analysis of iOS devices to identify participants in group calls. Group calls, I'm sure everybody

(48:16):
here who does digital forensicshas seen that maybe their
forensic tools don't necessarilyparse all of the participants
in a group call.
I think they've gotten better,but you used to not get all of
the participants in group calls.
So this investigates the CallHistory.storedata database and focuses on the call record, the Z handle, and the remote participant handles tables.
Talks about how examining thesetables and employing specific

(48:40):
SQLite queries can help youuncover the details of a group
call and its participants.
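
For examiners who want to follow along with the blog post, here is a hedged sketch of a first pass over CallHistory.storedata with Python's sqlite3. ZCALLRECORD and ZHANDLE are the commonly documented Core Data tables; the name of the join table that ties a call record to its remote participant handles varies by iOS version, so the sketch locates it by name rather than hard-coding it, and the blog post's own queries should be treated as the authoritative version.

```python
# Hedged sketch of a first pass over CallHistory.storedata. ZCALLRECORD and
# ZHANDLE are the commonly documented tables; the Core Data join table that
# links a call to its remote participant handles is located by name because
# it varies by iOS version. Verify against the blog post and your extraction.
import sqlite3

conn = sqlite3.connect("CallHistory.storedata")  # work on a copy
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

join_table = next((t for t in tables if "REMOTEPARTICIPANT" in t.upper()), None)
print("candidate join table:", join_table)

if join_table:
    cols = [c[1] for c in conn.execute(f"PRAGMA table_info({join_table})")]
    print("join table columns:", cols)
    for row in conn.execute(f"SELECT * FROM {join_table} LIMIT 20"):
        # Foreign keys: map them back to ZCALLRECORD.Z_PK and ZHANDLE.Z_PK
        # by hand, or adapt the blog post's SQLite queries.
        print(row)

# Each ZHANDLE row carries one participant address; dump whole rows rather
# than guessing at the value column's name.
for row in conn.execute("SELECT * FROM ZHANDLE LIMIT 20"):
    print(row)

conn.close()
```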

Speaker 1 (48:47):
And this is something they need to take into account,
not only for this type of dataright, but for any chatting
application, and I mentionedthis before.
Some co-workers from anotherdivision found that when they
look at a particular applicationchatting application and they
look at the users that wereparticipating in that group chat
, the database had a field thatexplained not explained but

(49:10):
showed if the person that'sparticipating was an
administrator of the channel ornot.
And that became really importantbecause there's some federal
law that states that if you'redoing certain crimes online, if
you're the administrator of thegroup that's making those crimes
, you get a higher penalty forbeing a leader, right?

(49:31):
And how do I prove that?
Well, by showing you're anadministrator, because you have
the power to, you know, addpeople to the crime group, take
them out.
You have the power to even tostop it as an administrator, but
you don't, right.
So there's a lot ofresponsibility for
administrators, and the toolingat the time did not show that
data.
It just showed the users andthe conversation, but it didn't
show if they were administratorsor not.

(49:53):
And the only way to find out isby you, the examiner, to
looking at the source data andsee what it says, what it shows.

Speaker 2 (50:01):
And utilize the blogs that are out there.
This blog will show you thequeries that you need to pull
that data out of the databaseand also tells you what tables
you need to look at.
Don't always have to go andrecreate the test data yourself.
It may be out there already. Absolutely. All right, so we're trying a new thing.

(50:21):
Um, we have the What's Up with the LEAPs, we have the Meme of the Week, and now we're gonna have an Artifact of the Week.
Um, yeah, so we'll do that as anew addition and we'll highlight
an artifact that may be helpfulin people's cases.
So this week specifically, I'mgoing to show everybody kind of

(50:43):
like the lifespan of a file thatgets deleted from the gallery
on an Android device.
So let me share my screen here.
So the gallery trash on anAndroid device, Starting in
Android version 9, if an imageor video is deleted, the file is
renamed and sent to thefollowing path Data media, zero,

(51:06):
Android data, the ComSecAndroid gallery, 3D Files, and
then trash.
So that file will remain in thetrash for 30 days.
During those 30 days the usercan go into the trash and
permanently delete the file orrestore the file to wherever its

(51:27):
previous path was, which couldbe the DCIM, the camera folder.
But if it's not restored withinthose 30 days, it'll
automatically be deleted fromthat trash.
So I have up on the screen justsome screenshots that show what
the trash on my Samsung devicelooks like, and then if I go

(51:47):
into that trash bin, I have somefiles in there and at the
bottom of each file it tells howmany days left that file has in
the Samsung's trash bin.

Speaker 1 (52:00):
I think you mentioned that with Pixel phones it's the
same thing.

Speaker 2 (52:03):
Yeah, the screenshot I show at the end will actually
be from my Pixel.
Excellent. This is just a couple of screenshots from parsed files.
I used Cellebrite for this, and these are parsed files that are in that trash directory.
You can see the file name.
That's what it gets renamedonce it's in that trash

(52:23):
directory.
So if you're looking at parseddata, you're going to see that
new file name.
You're going to see createddates, last access time, which
are usually related to thattrash bin, not necessarily when
the file was created, and then,if you're lucky, you'll have
some exit data there as well.

Speaker 1 (52:42):
And that file name seems to be like a big integer
with a minus sign on the front.
So people that are listening,they have an idea of what the
file name entails it's a biginteger number, really large,
with a minus, kind of like aminus sign on the front.

Speaker 2 (52:55):
Some of them have the minus and some of them do not
have the minus, and I haven'tfigured out exactly why yet.
But we'll work on it and when Ido we'll share.

Speaker 1 (53:03):
Absolutely.

Speaker 2 (53:05):
Let me just... I'm going to remove this from the screen and share a screenshot that I have from my Pixel.
So the screenshot I have on the screen now shows the database where you'll find information about the files that reside in that trash.
It's called local.db.
Let me zoom in so everybody who's watching can see.

(53:28):
So the database is local.db.
Oops, hold on, I'm on the wrong screen.
Zooming in, that doesn't help.
There we go.
So local.db.
And if you look at the tables in the local.db database, you see the trash table.
I have four files in my trash on my Pixel device.
In the ABS path column you'll see the path that they're

(53:52):
currently residing in, which is the trash, and that long integer number that the file has been renamed to when it was deleted.
Scrolling over just a little bit, there's origin path, which will give you the original path and file name related to the file that has been deleted.
So all four of my images were originally in the DCIM camera

(54:16):
folder and they were named 2025-02-27, like the normal naming convention that you'll see for the Samsung device.
So all four of these images were taken with the Samsung device.
I know that because I took the pictures.
And then another column of interest is the delete time.

(54:36):
So all of these images were deleted on 2/27 at 2:06:56, and you can find that right in the delete time column.
Further over there's a column called restore extras, and I have that restore extras up on the end column here, and it has additional information about the file.

(54:58):
So this is an image file.
If it had been a video file, the duration is in here, the original path is in here, but the date taken is also in this restore extras column.
The date taken can be brought over to a tool like DCode to be decoded, and you'll see that this picture in particular was

(55:19):
taken on 2/27 at 2:06.
And if I come back up, it was deleted at 2:06:56.
So just seconds from when the picture was taken it was deleted and sent to that trash, where it will remain for the 30 days unless I restore it or let that 30 days expire and it's removed

(55:41):
permanently.
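For examiners who want to reproduce this outside a commercial tool, here is a minimal sketch against an exported copy of that local.db. The table and column identifiers (trash, abs_path, origin_path, delete_time, restore_extras) and the dateTaken key follow the spoken labels from this walkthrough and are assumptions, as is the guess that the timestamps are stored as Unix epoch milliseconds (which is what the DCode conversion suggests); confirm all of them against your own database.

# Minimal sketch: pull deleted-gallery records from an exported copy of the
# gallery's local.db and make the timestamps readable. Table/column names,
# the dateTaken key, and the epoch-milliseconds assumption are all UNVERIFIED
# placeholders based on the walkthrough above -- check the real schema first.
import json
import sqlite3
from datetime import datetime, timezone

def from_epoch_ms(value):
    """Convert an assumed epoch-milliseconds value to a readable UTC string."""
    try:
        return datetime.fromtimestamp(int(value) / 1000, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
    except (TypeError, ValueError):
        return None

def dump_trash(db_path):
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT abs_path, origin_path, delete_time, restore_extras FROM trash"
    )
    for row in rows:
        extras = {}
        if row["restore_extras"]:
            try:
                extras = json.loads(row["restore_extras"])  # extra metadata, if JSON-encoded
            except (json.JSONDecodeError, TypeError):
                pass
        print("Trash name :", row["abs_path"])
        print("Origin path:", row["origin_path"])
        print("Deleted at :", from_epoch_ms(row["delete_time"]))
        print("Date taken :", from_epoch_ms(extras.get("dateTaken")))
        print("-" * 40)
    conn.close()

if __name__ == "__main__":
    dump_trash("local.db")  # exported copy of the gallery's local.db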

Speaker 1 (55:43):
That's, that's fantastic.
You know, can you imagine?
It's like this picture was taken whatever weeks ago and then it got deleted right before, right after we knock on the door.

Speaker 2 (55:53):
Yeah, so I've had that happen, it's not.
There's a couple of cases I'vehad that happen and and you
won't know this you won't knowthat they were all deleted the
morning that the officers wereknocking on the door.
You won't know what itsoriginal name was or where its
original path was, or if it waspotentially taken with the
camera, if it's in the DCIMcamera, unless you go into the

(56:16):
database and do some manual analysis yourself, unless you use ALEAPP.
Because as soon as I found this, I called Alex.
I didn't know how to do the ALEAPP, iLEAPP stuff.
It was quite a few years ago and I said can you support this

(56:37):
in ALEAPP, please?

Speaker 1 (56:37):
so I'll have a pretty report for my case.
And he did.
Yeah, we try to please.

Speaker 2 (56:39):
We aim to please.
Ah, I've got messages in here that say we can't see it.
Did everybody see the screenshot?

Speaker 1 (56:45):
I don't know.
I mean, I'm seeing it.
Let me, let me go to the, uh, let me go to the, uh, live channel, let me see what's the deal.

Speaker 2 (56:52):
My sister wrote, "Damn you, AI."

Speaker 1 (56:56):
Don't change it.
Oh, that's true, it's not showing.
Oh no, it's just a blank space.
I went to the channel to see, just to see.

Speaker 2 (57:05):
Okay.

Speaker 1 (57:06):
Yeah, let me see what the deal is.

Speaker 2 (57:08):
I'm going to remove it from the screen and re-add it.
No, no, no.

Speaker 1 (57:10):
Leave it up there, leave it up.

Speaker 2 (57:13):
No, no, no, no, leave it up there, leave it up.

Speaker 1 (57:14):
All right, I'll leave it there, and what else?

Speaker 2 (57:16):
I'm totally going to share it out on the, on the, on the podcast page.
I apologize, I've just explained it all.
Well, it's like you're listening to the audio podcast without the YouTube version with the video, right?

Speaker 1 (57:30):
Yeah, no, for sure, for real.
I'm looking, I'm spying on YouTube, and it's like it's a blank space.
That's just so odd.
So uh, yeah, you know, I mean.

Speaker 2 (57:41):
I will share the screenshot.
So I took, um, I took and drew boxes around, uh, each of the columns that I just mentioned, and I, uh, put, uh, not definitions, but, uh, explanations of what the data is in each of those columns.
So I will share that out with everybody.

Speaker 1 (57:58):
Yeah, we will.
I'm sorry.
I'm just kind of testing out if I make it full screen, if it shows, but I don't think that's going to happen.
Yeah, I made it full screen.
I can see it.
Can people confirm they're seeing it when I put it full screen? Because I think I'm seeing it.

Speaker 2 (58:10):
Holly, can you see that?

Speaker 1 (58:18):
I know my sister's listening. Yeah, yeah, they're seeing it. You know what? It shows now.
Yeah, I made it full screen.
It seems that that sizing messes it up.
But I mean, can you just, like, quickly, kind of, since now they can see it, can you quickly talk about the colors, like this box has this? Oh yeah, colors.
And no, don't zoom it in, just leave it there, at least they can see it.

Speaker 2 (58:34):
So the green box is in the column labeled ABS path, and that contains the path that the file is currently in in the trash, with that long integer name.
The red box is the original path and file name related to that trashed file, so it will be, for the, for

(58:57):
my files, DCIM camera and then the original file name of the images that I took with this device's camera.
The blue box is the delete time.
All four of these images were deleted at the same date and time, so I have the same date and time for all four.
And then over on the left-hand side is that data that I said was in the column called Restore Extras, and it includes the date

(59:19):
taken, which I have in a purple box, and down on the bottom is another purple box in the DCode tool showing the conversion of that timestamp.

Speaker 1 (59:33):
Excellent.
So at least people got a... the ones that are live, or watching the recording later, can see what the database kind of looks like and the decoding process to get that timestamp, so pretty good stuff.

Speaker 2 (59:43):
I just went on gabbing and didn't even check
the comments.

Speaker 1 (59:47):
I don't know.
It happens and again, there's alot of changes on our system
that we use for for the podcast.
So that might be one of theissues.
We had to email the developersand say, hey, this might be
happening, and then they can fixit.
I'm going to take it out of... I'm going to remove it, okay, and then whenever we show the... I took ourselves out... whenever we

(01:00:08):
show the meme of the week, then... oh, hold on, I don't like this, hold on, I like to be on the next. Ah, that's better.
Yeah, see, I get antsy, all right.
So whenever we show the meme of the week, let's make it full screen, so hopefully that will show.
Oh, yeah, definitely.
Oh, yeah, definitely.

Speaker 2 (01:00:24):
And I'm going to share out that picture for the
deleted gallery.
I'm going to put it right on myLinkedIn and on the podcast
LinkedIn so everybody can seethat.

Speaker 1 (01:00:31):
Excellent.

Speaker 2 (01:00:33):
All right.
So we are at the What's New with the Leaps.
I know that there is a ton of stuff new with the Leaps.
There's all kinds of stuff going on in the background by a bunch of amazing people.
I'm just going to highlight one update to iLEAPP this week.
So there was an update to the health parsers. Kevin Pagano,

(01:00:53):
who is in the chat.
He added location type for activity, so indoor and outdoor, for the health parsers on iOS.
He had a new parser for source devices, which includes make, model, software and other device info.
So if you're looking at the health parsers in iOS, you can

(01:01:17):
go in now and see if activities were indoor versus outdoor as logged in that health database.

Speaker 1 (01:01:23):
Well, and that's... no, no, that's two things.
The first is that it really stresses the point that you might be used to, or really know really well, a data source.
Oh, I've done this data source 20 times and it's always, you know.
Depending on the case, give it a do-over real quick, because you can find new stuff in old databases.
There's nothing that prohibits the vendor that's making the

(01:01:44):
tool, or the product I should say, from adding a new table with useful stuff.
And Kevin is just finding more stuff in a database that we thought we knew all about, but there's always something new that could come out.
So that's pretty good.
I want to really give a quick shout out to Johan Polacek.
He's the best.
He's been doing a lot of work underneath the hood of the Leaps, specifically to make it work for LAVA, and those that are not

(01:02:17):
aware.
It's a new system that we're developing to be able to deal with large data sets and be able to present them to you, no matter how large that data set is.
Because currently the Leap tooling is HTML-based reporting, and HTML-based reporting breaks really easily if it deals with a large data set, like a large report.
So we're working on that.
Johan is actually working on the media section to show media within LAVA from the Leap side, and he's doing an amazing job of

(01:02:39):
that.
I'm trying to get a hold of my work situation and wait till things settle down a little bit, and then I can hopefully add to that effort, migrating artifacts to the new LAVA system and helping with other general coding responsibilities.
So I know I've been a little bit out of action, but the hope is that soon enough we can start working on that.

(01:03:02):
So, again, thanks to Kevin.

Speaker 2 (01:03:03):
Kevin is just even saying that there's more Apple Health coming up, so that's pretty exciting. Yeah, definitely.
And we are at the end of the show, to the meme of the week, so let me share my screen. Let me confirm that we can see.

Speaker 1 (01:03:23):
Let me spy on the... yeah, can we? Let me spy, so let me see, let me see.
Okay, we can see it, so it was just that one screenshot.
We can see it, I can see it, so it was just that one screenshot.
Hmm, weird. Yeah, I can see it, I can see it.

Speaker 2 (01:03:39):
The meme of the week goes along with our AI chat.
This week we have an iceberg on the meme and, above water, what the AI finds, and then, below water, what only you can find.

Speaker 1 (01:03:52):
Well, and the topic with icebergs is that, you know, it's the tip of the iceberg, right?
Yes, it is. Usually the tip is the smallest part of the iceberg.
What's underneath is what can sink the Titanic.
Right, right, exactly.
And that's me trying to give a visual exhortation to examiners to be aware that, yes, we have backlogs, yes, we have a lot of

(01:04:14):
cases, yes, we're overworked. Right now, budgets are being cut, people are being fired, and we have to do more with less and less and less, and I get that.
But that just means that we need to be more clear-eyed about what's important.
And how do we get there and not think that, if we offload the responsibility to a system, that will take care of the

(01:04:36):
issue?
Most likely it will not.
So we have to think about, and it could be a topic for a full episode, how can we be more efficient, how can we speed up our verification and validation procedures, and how do we do a workflow that maximizes the identification of key items in a case, in order to be able to be as efficient as we can with what

(01:04:56):
we have?
But again, it's just a way for me to really motivate people to, or at least, take that into account.

Speaker 2 (01:05:05):
Absolutely. All right.
That's all I've got, that's all we got.

Speaker 1 (01:05:11):
Yay, I mean.
Again, thank you everybody for sticking with us.
Um, again, there's a lot of things going on around the world, things going on at work, things going on in our personal lives, even car crashes going on. That's enough of that. Yeah, please, no more.
Um.
So again, life is life and we all live it together.
So we appreciate your understanding and we'll try to

(01:05:34):
be as consistent as we can and hopefully, you know, things get better for everybody, and we'll keep trucking here and giving you the latest on digital forensics and our opinions on those things.
Yeah, anything else for the good of the order, Heather?

Speaker 2 (01:05:46):
That's it.
That's all I've got.

Speaker 1 (01:05:49):
All right, well, everybody, we'll see you all
soonish again.
Thank you, everybody.

Speaker 2 (01:06:02):
Those are live, the ones that are watching.

Speaker 1 (01:06:03):
Thank you for that, those hearts, we love you as well, and I will see you on the next episode.

(01:06:30):
Yeah, thank you.
Have a good night. Bye, thank you.

(01:06:51):
Outro Music.