Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome everyone. Today, we're getting straight into the realities of
modern mobile security, and really it boils down to one core, kind of dangerous, truth: the client side is basically enemy territory. I'm George. And I'm Sky.
Speaker 2 (00:17):
Hello. Yeah. Whether you're coding for iOS, Android, HarmonyOS or
cross platform with Flutter or React Native, the problem's the same: how do you actually secure sensitive data when you just can't trust the app client? We've been looking at some interesting material, using a simple weather app, believe it or not, as a sort of template for finding these really complex
(00:37):
architectural weaknesses.
Speaker 1 (00:39):
Yeah, and our goal here is to pull out some
genuinely actionable insights, not just theory, right, but stuff you can use to prevent data breaches, API abuse that costs real money, and even system compromise. Exactly.
Speaker 2 (00:50):
We're focusing on the how: the actual commands attackers use
and the real architectural fixes you need to stop them.
Speaker 1 (00:56):
Okay, so let's start with the first big one, the mistake that's just, well, the lowest-hanging fruit out there.
Speaker 2 (01:00):
Exposed API keys. Yep, often hardcoded right into the app package or maybe some client-side JavaScript.
Speaker 1 (01:07):
And finding them is worryingly simple, isn't it?
Speaker 2 (01:10):
Oh, incredibly simple. Sometimes at first, attackers don't even need complex tools. They can literally just use curl -s to silently grab the HTML source of, say, a web app or a target page.
Speaker 1 (01:22):
Just dash s for silent, right? Yeah, no progress meter, exactly.
Speaker 2 (01:25):
Then they just pipe that straight into grep, and they're looking for, well, common patterns: things that look like keys being assigned, you know, alphanumeric strings maybe thirty-two characters or longer.
Speaker 1 (01:36):
Anything that screams credential.
Speaker 2 (01:38):
Precisely, stuff that looks like it's for OpenWeather, Google Maps, AWS, whatever service you're using client side.
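[Show notes: for listeners who want to see it, here's a rough Node.js equivalent of that curl-and-grep hunt. The target URL is a placeholder, and the 32-character pattern is just the heuristic mentioned above, not a definitive detector.]

    // Fetch a page's source and flag quoted alphanumeric strings of 32+ chars,
    // the classic "this looks like an API key" pattern.
    const https = require('https');

    https.get('https://example.com/app.js', (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => {
        const matches = body.match(/['"][A-Za-z0-9]{32,}['"]/g) || [];
        matches.forEach((m) => console.log('possible credential:', m));
      });
    });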
Speaker 1 (01:44):
And if they find one hardcoded, yeah, the damage is...
Speaker 2 (01:48):
Instant, pretty much. They grab the key, make unauthorized calls, and suddenly you're the one exceeding quotas, getting hit with huge bills from the third-party service. It's a fast, painful lesson in cost.
Speaker 1 (01:59):
Okay, but what if the key isn't just sitting there
in the source code.
Speaker 2 (02:02):
Well, that's where traffic interception comes in. That's the next logical step. This is your classic man-in-the-middle, or MITM, attack, often done using something like Burp Suite.
Speaker 1 (02:12):
Now hold on, a lot of devs think: I'm using HTTPS, TLS encryption, my key is safe in transit. Why doesn't that stop Burp Suite?
Speaker 2 (02:20):
Ah, because the attacker controls the client device. See, TLS protects against someone else listening in between you and the server, like on the wider internet, right? But it doesn't stop the owner of the device, who is the attacker in this case, from inspecting their own traffic. They install their own trusted root certificate on their phone or emulator.
Speaker 1 (02:39):
So they basically tell their device: trust this eavesdropper. Exactly.
Speaker 2 (02:43):
Then they configure their browser, or maybe the app itself if they can, to route traffic through the Burp proxy, and bang, they can decrypt their own HTTPS traffic. One-way TLS becomes kind of useless at that point.
Speaker 1 (02:55):
Wow, okay. So they just use the app normally, search for a city in our weather app example, yep, perform an action. Burp captures the request, and now the attacker sees everything: the exact API endpoint, how the key is sent, the parameters, all laid out, which
Speaker 2 (03:11):
Is gold dust for replicating attacks.
Speaker 1 (03:13):
Absolutely. The key takeaway is just this: any static secret, any credential embedded in the app code or visible in the traffic from the app, is fundamentally vulnerable to reverse engineering and interception. It's going to get found. The client environment is hostile. Assume that.
Speaker 2 (03:28):
Okay, so they can see the raw requests. They've moved past just listening. Now they start probing, right? Where does that visibility lead them next?
Speaker 1 (03:36):
Straight into testing input validation. Even in our simple weather app, that search box is prime real estate for attack. They'll start hammering it, looking for injection flaws.
Speaker 2 (03:45):
Let's get specific. If the back-end developer hasn't been careful, what kind of payloads are we talking about? Command injection, NoSQL injection? What do those actually look like?
Speaker 1 (03:54):
Okay, for command injection, a classic test, maybe sent with curl, could look
Speaker 2 (03:58):
like this: city=London;cat /etc/passwd.
Speaker 1 (04:02):
Okay, unpack that for someone maybe not deep into shell scripting. What's the malicious bit there? It's the semicolon, right? Exactly.
Speaker 2 (04:09):
That semicolon is an instruction. If the server-side code just blindly takes that input and passes it to a system shell, which it absolutely shouldn't, bad practice, very bad practice, the shell executes the first command fine, weather for London, fine. Then it sees the semicolon and executes the next command: cat /etc/passwd. If that works,
(04:30):
the attacker suddenly has a list of user accounts on
the server.
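[Show notes: a minimal Node.js sketch of the server-side mistake being described, plus the safer version. The weather-cli binary is purely illustrative.]

    const { exec, execFile } = require('child_process');

    // DANGEROUS: user input is interpolated into a shell command, so
    // city = "London;cat /etc/passwd" runs both commands.
    function vulnerableLookup(city) {
      exec(`weather-cli ${city}`, (err, stdout) => console.log(stdout));
    }

    // Safer: execFile passes city as a single argument with no shell involved,
    // so metacharacters like ";" lose their special meaning.
    function saferLookup(city) {
      execFile('weather-cli', [city], (err, stdout) => console.log(stdout));
    }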
Speaker 1 (04:34):
And that's just the start. That's reconnaissance for deeper access. It's kind of scary how a simple feature flaw can expose so much.
Speaker 2 (04:41):
It really is, and it's not just shell commands. If the back end's using, say, MongoDB, they'll try NoSQL injection. A payload might be something like city={"$ne":null}.
Speaker 1 (04:49):
Okay, $ne, that's MongoDB syntax, right?
Speaker 2 (04:51):
It means not equal. They're trying to inject MongoDB's own operator syntax directly into the query. The goal is to manipulate the database logic, maybe bypass some security filter or just dump unexpected data.
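[Show notes: a minimal sketch of why that operator injection works against a Node.js MongoDB back end, with a simple type check as the guard. The collection and field names are illustrative assumptions.]

    // If query parsing turns city into the object { "$ne": null }, this
    // query matches every document instead of a single city.
    async function vulnerableFind(db, city) {
      return db.collection('weather').find({ city: city }).toArray();
    }

    // Simple guard: only accept plain strings, so injected operator
    // objects like { "$ne": null } are rejected outright.
    async function saferFind(db, city) {
      if (typeof city !== 'string') throw new Error('invalid city parameter');
      return db.collection('weather').find({ city: city }).toArray();
    }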
Speaker 1 (05:04):
So connecting the dots: a single exposed key or maybe a validation flaw like this, it's often just the first step, the
Speaker 2 (05:12):
Foothold, exactly. The path often leads towards full cloud compromise.
That's lateral movement, and it happens when that initial stolen
key has way too many permissions. That's the critical architectural mistake.
Speaker 1 (05:23):
Over-permissioning: granting more access than absolutely needed.
Speaker 2 (05:28):
That's the severity escalation we really need to understand. The failure is treating that app instance, that non-human client, like it's a trusted, privileged user. It isn't. You need minimal privilege, always.
Speaker 1 (05:40):
So they get an AWS key, maybe from the app, maybe via MITM. What's the playbook?
Speaker 2 (05:45):
Okay, so they have the key. First, they'll set it up in their own environment: export AWS_ACCESS_KEY_ID and export AWS_SECRET_ACCESS_KEY. Standard stuff.
Speaker 1 (05:54):
Then they start poking around with the AWS command-line interface. Precisely.
Speaker 2 (05:58):
They immediately try to figure out what the key can do. They'll run commands like aws iam list-attached-user-policies, just to enumerate the permissions. And
Speaker 1 (06:05):
if the key was carelessly given broad access.
Speaker 2 (06:08):
That's the nightmare scenario. If it has enough permissions, like, say, IAM permissions itself, they can create persistent access. They'll execute something like aws iam create-user, followed by aws iam attach-user-policy, maybe attaching the AdministratorAccess policy to their new user. Game over.
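[Show notes: the same enumeration step written against the AWS SDK for JavaScript v3, mainly to show how little effort this takes once a key leaks. The user name is a placeholder.]

    const { IAMClient, ListAttachedUserPoliciesCommand } = require('@aws-sdk/client-iam');

    // The client automatically reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
    // from the environment -- exactly what the attacker just exported.
    // IAM is a global service, but the SDK still wants a region configured.
    const iam = new IAMClient({ region: 'us-east-1' });

    async function enumeratePolicies(userName) {
      const out = await iam.send(new ListAttachedUserPoliciesCommand({ UserName: userName }));
      (out.AttachedPolicies || []).forEach((p) => console.log(p.PolicyName, p.PolicyArn));
    }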
Speaker 1 (06:25):
They basically own your AWS account pretty much.
Speaker 2 (06:28):
It's like, yeah, giving the intern the master keys to everything, all starting from maybe one little mistake in the mobile app's architecture.
Speaker 1 (06:34):
Which drives home why this idea of the client side being enemy territory forces a shift. We have to move away from static secrets on the client. So what are the absolute must-do architectural basics to keep secrets safe and stop these escalations?
Speaker 2 (06:48):
Number one, without a doubt: server-side proxying and using environment variables correctly. API keys belong on the server, period. Never, ever on the client.
Speaker 1 (06:57):
So contrast the bad way with the good way.
Bad way is the key in the client code.
Speaker 2 (07:01):
Right, vulnerable. The secure way, let's take a Node.js Express back-end example. You'd have something like const apiKey = process.env.OPENWEATHER_API_KEY right there in your server code.
Speaker 1 (07:11):
That process.env, that's pulling from the server's environment variables, completely separate from the client app bundle.
Speaker 2 (07:19):
Exactly. The secret sauce is that environment variable kept securely
on the server. Now think about the flow. The mobile
app doesn't talk to OpenWeather directly anymore. It talks only
to your server endpoint. Your server gets the request, it
internally pulls the API key from its environment, makes the
call itself to OpenWeather, gets the weather data.
Speaker 1 (07:40):
And then just sends the necessary data back to the
client app.
Speaker 2 (07:44):
Exactly. Only the sanitized data goes back. The key never leaves the server; it's never transmitted to the user's device. That totally shuts down the MITM risk for the key, and the risk of it being found in the code. You've moved the trust boundary.
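[Show notes: a minimal Node.js Express sketch of that proxy pattern, for Node 18+ where fetch is built in. The /weather route, the OPENWEATHER_API_KEY variable name, and the response shape are illustrative assumptions; the upstream URL follows OpenWeather's commonly documented format.]

    const express = require('express');
    const app = express();

    // The key lives only in the server's environment, never in the app bundle.
    const apiKey = process.env.OPENWEATHER_API_KEY;

    app.get('/weather', async (req, res) => {
      const city = encodeURIComponent(req.query.city || '');
      // The server makes the third-party call itself; the client never sees the key.
      const upstream = await fetch(
        `https://api.openweathermap.org/data/2.5/weather?q=${city}&appid=${apiKey}`
      );
      const data = await upstream.json();
      // Send back only the sanitized data the client actually needs.
      res.json({ city: data.name, temp: data.main && data.main.temp });
    });

    app.listen(3000);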
Speaker 1 (07:56):
Makes sense. Okay, secrets managed, but what about just getting flooded with requests? Even if the key is safe on our server, a botnet could just hammer our new proxy endpoint, right? Run up our server costs, or maybe still hit third-party limits through us.
Speaker 2 (08:10):
Absolutely, that's where rate limiting comes in. It's crucial. You need to prevent what some people jokingly call denial-of-wage attacks.
Speaker 1 (08:17):
Huh, I like that. Because they cost you wages, basically. Yeah.
Speaker 2 (08:22):
Either by exhausting those third-party API quotas via your proxy, or just by overwhelming your own server resources with traffic.
Speaker 1 (08:30):
So how do you implement that for devs using Express, for instance?
Speaker 2 (08:33):
There are great libraries like express-rate-limit. It's pretty straightforward to configure. You set a time window, say fifteen minutes, windowMs: 15 * 60 * 1000, and a maximum number of requests allowed from a single IP in that window, maybe max: 100.
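[Show notes: a minimal sketch using the express-rate-limit package with the exact window and cap just mentioned.]

    const express = require('express');
    const rateLimit = require('express-rate-limit');

    const app = express();

    // At most 100 requests per IP in any 15-minute window; anything over
    // that automatically gets a 429 Too Many Requests response.
    const limiter = rateLimit({
      windowMs: 15 * 60 * 1000,
      max: 100,
    });

    app.use(limiter);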
Speaker 1 (08:48):
One hundred. And if someone goes over
Speaker 2 (08:49):
That limit, the library automatically starts blocking subsequent requests from
that ip it sends back a four twenty nine too
many requests error. It's a really important layer for cost
control and basic availability defense. And you know, while we're
talking server side, we can't ignore the actual cloud infrastructure itself.
Developers have to regularly audit their cloud security posture. Misconfigured
S three buckets, for example, that's still a huge source
(09:12):
of data leaks.
Speaker 1 (09:12):
So checking bucket policies is key, using commands like aws s3api get-bucket-policy.
Speaker 2 (09:17):
Yep, and even more fundamentally, checking things like EC2 security groups: are they locked down? You should only be allowing traffic on necessary ports, typically 80 and 443 for web
Speaker 1 (09:29):
Servers. And critically, making sure there aren't default allow-all rules left active.
Speaker 2 (09:34):
Oh, absolutely, that's a classic, dangerous mistake. Default allow is just asking for trouble. Always start with deny by default and explicitly allow only what's needed.
Speaker 1 (09:43):
Okay, so we've got server-side proxying, rate limiting, cloud hardening. These are essential. But even with all that, you mentioned the core problem remains: the server can't really trust where the traffic is coming from. Static secrets can still potentially be pulled with enough effort, communications spoofed. Right?
Speaker 2 (10:00):
Traditional methods have limitations because the client environment itself can
be compromised or simulated. We need something stronger. We need
a way to authenticate the app instance itself, proving it's
the genuine, untampered app.
Speaker 1 (10:11):
And that's where app attestation comes in.
Speaker 2 (10:13):
Exactly. This moves us towards a positive security model. It's
fundamentally different.
Speaker 1 (10:18):
How so? How is it different from just, say, embedding a secret key or even using mutual TLS?
Speaker 2 (10:23):
Because it doesn't rely on any static secret embedded in the app. Instead, it uses a dynamic challenge-response cryptographic protocol.
Speaker 1 (10:31):
Okay, dynamic challenge-response. Yeah.
Speaker 2 (10:34):
The attestation service effectively measures the integrity of the app instance that's running right now. It checks if the code has been tampered with, if it's running on a rooted or jailbroken device or in an emulator, if a debugger is attached, anything suspicious.
Speaker 1 (10:47):
It's like a live health check of the app environment.
Speaker 2 (10:49):
Sort of, yeah, a cryptographic health check. If that check passes, meaning the app seems legitimate and untampered, the app receives a special token, usually a short-lived JSON Web Token.
Speaker 1 (11:00):
A JWT, okay, a short-lived token. What does the app do with that?
Speaker 2 (11:04):
It includes that JWT in the header of its subsequent API requests to your server. Your server's gateway or back-end logic then just needs to validate that JWT: is it present, is it correctly signed by the attestation service, has it expired?
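[Show notes: a minimal Express middleware sketch of that validation, using the common jsonwebtoken package. The header name, the key source, and the algorithm are illustrative assumptions; a real attestation service documents its own verification details.]

    const jwt = require('jsonwebtoken');

    // Hypothetical: the attestation service's verification key, configured out of band.
    const ATTESTATION_PUBLIC_KEY = process.env.ATTESTATION_PUBLIC_KEY;

    function requireAttestation(req, res, next) {
      const token = req.get('App-Attestation'); // illustrative header name
      if (!token) return res.status(401).json({ error: 'missing attestation token' });
      try {
        // verify() checks the signature and rejects expired tokens in one call.
        jwt.verify(token, ATTESTATION_PUBLIC_KEY, { algorithms: ['RS256'] });
        next();
      } catch (err) {
        res.status(401).json({ error: 'invalid attestation token' });
      }
    }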
Speaker 1 (11:20):
And if the JWT is missing, or invalid, or expired?
Speaker 2 (11:24):
Request rejected immediately. No valid token, no service. Because the check relies on a live, dynamic measurement, not a static secret that can be stolen, traditional reverse engineering just doesn't work against it.
Speaker 1 (11:36):
That sounds really powerful. But implementing that whole dynamic cryptographic check, that sounds complex. Is that something a typical dev team builds themselves?
Speaker 2 (11:45):
Honestly, no. It generally requires a specialized service. The measurement techniques need constant updating to detect new threats, and the back-end validation needs to be highly secure. It's a specialized defense mechanism.
Speaker 1 (11:57):
Okay, so you typically use a third-party provider for this.
Speaker 2 (12:00):
Usually yes, but the security payoff is significant. It enables
something called runtime secrets.
Speaker 1 (12:06):
Runtime secrets, meaning secrets delivered only when needed? Exactly.
Speaker 2 (12:10):
This is kind of the ultimate step: you remove all third-party API keys, like that OpenWeather key, completely from your app package.
Speaker 1 (12:18):
Gone. So where does the app get the key from?
Speaker 2 (12:21):
When the app passes attestation, the attestation service itself can securely deliver the actual API key needed for the next call, just in time, directly to that validated, legitimate app instance. The key exists only in memory, briefly, when needed. Key exposure risk: basically eliminated.
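[Show notes: a rough client-side sketch of that just-in-time delivery, heavily hedged. Everything here, the endpoint, the header, the response shape, is an illustrative assumption, since real attestation SDKs define their own APIs.]

    // Hypothetical flow: exchange a fresh attestation token for a short-lived API key.
    async function getRuntimeSecret(attestationToken) {
      const res = await fetch('https://your-backend.example.com/runtime-secret', {
        headers: { 'App-Attestation': attestationToken }, // same illustrative header as before
      });
      if (!res.ok) throw new Error('attestation check failed');
      const { apiKey } = await res.json(); // key lives only in memory, never in the bundle
      return apiKey;
    }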
Speaker 1 (12:41):
Wow, okay, that really closes the loop.
Speaker 2 (12:43):
It does. Looking at this whole journey, even from just analyzing a simple weather app, it really highlights a needed shift in thinking. We have to shift security left, build it in from the start, adopt zero-trust principles where server-side validation and minimal privilege are the absolute defaults. Because
Speaker 1 (12:58):
The core failure, time and
Speaker 2 (13:00):
Again, it's architectural. It's fundamentally about trusting the client environment
with things, secrets, logic that really belong on the server.
Speaker 1 (13:06):
And here's a final thought, maybe a bit provocative. As
AI starts writing more and more code, code that might look correct and pass functional tests, there's a prediction, or maybe a concern, that we'll see a rise in what you might call AI-introduced security anti-patterns: code that works but is built on older, insecure architectural assumptions, like maybe
(13:30):
defaulting back to putting keys client side because it's functionally
simpler in the moment for the AI.
Speaker 2 (13:36):
That's a really interesting and slightly scary point. It means
developers can't just rely on functionality. They'll need to double
down on understanding these foundational security principles. Code reviews become
even more critical, specifically looking for these kinds of systemic
architectural flaws that an AI might inadvertently create.
Speaker 1 (13:54):
Food for thought.
Speaker 2 (13:54):
Definitely, indeed. Well, this session was produced using human sources and assisted with AI technology. Thanks for tuning in.