Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to Upwardly Mobile API and App Security. We're the
show dedicated to helping you, mobile app developers and security
pros stay ahead in this constantly shifting world of digital threats.
We sift through the noise, find the critical insights, and
get you informed fast. Today we're tackling the really high
stakes area of mobile app development and API security. I mean,
(00:22):
it feels like new threats pop up every day, right,
and we're going to explore why the built in protections
you get from platforms often well they just don't cut it,
and what advanced techniques are really essential now to protect
sensitive data. I'm George and I'm glad Sky's here to
walk us through this. It's great to be here, George. Yeah.
Speaker 2 (00:38):
We'll be looking at things like a proposed update to the HIPAA Security Rule, guidance from the OWASP Mobile Project, some deep dives into code obfuscation, specifically using LLVM, and a pretty detailed look at iOS app security, particularly Apple's, let's say, unique position on it and where third party
solutions fit in. It feels like a really critical conversation
for anyone building or securing these apps, wouldn't you say,
(01:01):
especially if you're working in regulated industries, whether that's iOS,
Android or.
Speaker 1 (01:05):
HarmonyOS. Absolutely essential. Okay, so let's start breaking this down.
The first point that really jumps out is how uniquely
exposed mobile app code is. It's not like server code, is it? Not at all.
Speaker 3 (01:19):
Once that app is out there, deployed on a device,
its binary can potentially be decompiled, taken apart, and analyzed
by well anyone who gets their hands on it.
Speaker 1 (01:29):
And this is where the distinction between device security and app security gets really interesting. Exactly, and it's often misunderstood. Take Apple, for example. They genuinely have world-class security architecture for the device itself, you know, the hardware root of trust, Secure Enclave, data protection, the app sandbox.
These things are incredibly good at protecting data when it's
just sitting there at rest, and shielding the operating system
(01:52):
from say, rogue apps. They build this really strong perimeter
around the device. Okay, so Apple builds this fortress, but
you're saying that doesn't automatically protect the app's own logic
like the secret sauce inside.
Speaker 3 (02:07):
Precisely. That's the whole man-at-the-end attack scenario. As soon as a user launches your app, the binary gets decrypted, it's loaded into memory, and at that moment a skilled attacker, who, remember, may be the legitimate owner of the device, can fire up debugging tools, instrumentation frameworks, and basically watch your app run, poke at its code, see how
(02:27):
it behaves in real time. And this creates a huge
security gap, especially if you've got you know, proprietary algorithms,
core business logic, maybe even cryptographic keys embedded right there
in the app.
Speaker 1 (02:39):
It's like having a super secure bank vault, but leaving
the combination written on a note inside once the main
door is open.
Speaker 3 (02:44):
That's a pretty good analogy, actually. Yeah, so...
Speaker 1 (02:46):
This must create some headaches for developers, especially in those
regulated fields where the standards are so high.
Speaker 3 (02:51):
Oh, absolutely. There's this sort of security catch-22. On one hand, compliance standards, like HIPAA in healthcare or the rules in finance, mandate that you must protect against reverse engineering and tampering.
But then the very techniques you might use, maybe really
aggressive code obfuscation, could potentially get your app flagged or
even rejected by the app store.
Speaker 1 (03:13):
Why? Because Apple's tools might mistake it for malicious code trying to hide something? Exactly.
Speaker 3 (03:19):
They're worried about apps trying to sneak things past the
review process. So developers are walking this tightrope.
Speaker 1 (03:24):
Okay, so the app code itself is vulnerable on the device.
Are attackers stopping there or is the threat landscape wider?
Speaker 3 (03:31):
Oh, it's much wider. Think about the explosion in mobile
healthcare apps. You've got patients doctors using them on personal phones,
often way outside any secure hospital network. And these apps
are increasingly connecting to APIs that give access to sensitive
patient data EPHI. So you've got this really rich ecosystem
that's basically.
Speaker 1 (03:48):
A prime target for attackers coming from multiple angles.
Speaker 3 (03:51):
You got it. And the traditional security methods, things like
just scanning for known malware signatures or simple checks to
see if the app's been tampered with, they often
don't hold up. Even something as fundamental as TLS encryption
can be undermined if an attacker manages to mess with
the trusted certificates on a compromised device, and that's definitely possible.
Speaker 1 (04:12):
So what kind of specific attacks are we seeing developers
really struggle with?
Speaker 3 (04:16):
Well, several key types. First, clone or modified apps. Someone
takes your legitimate app, injects malicious code, repackages it, and
tricks users into downloading it. Then there's runtime manipulation. If
a device is jail broken or rooted, attackers can use
powerful tools, things like Frida or mitmproxy.
Speaker 1 (04:37):
Those let them hook into the running app, yeah, inject code, watch network traffic.
Speaker 3 (04:41):
Precisely, they can basically hijack the app's operations while it's running.
Then you've got man in the middle attacks trying to
intercept that communication even if it's encrypted. But a really
big one that keeps coming up is the exposure of
API secrets, keys, tokens, credentials. One analysis mentioned that a staggering seventy-nine percent of healthcare organizations had dealt with an API-related security incident in the last year.
Speaker 1 (05:03):
Seventy nine percent.
Speaker 3 (05:05):
Yeah, and often it boiled down to API keys just
being too easy to extract from the mobile app's code.
Speaker 1 (05:10):
That is... yeah, that's a huge number. It really paints a picture of the scale of this problem. So, given how
clever these attacks are, what actually works against them? Are
there genuinely effective defenses emerging?
Speaker 3 (05:23):
Yes, definitely, And that's where the conversation is shifting, especially
with things like potential HIPAA updates looming. The focus is
moving towards more specific, more advanced mobile app protections that
go beyond the basics the platforms provide.
Speaker 1 (05:36):
Okay, let's dig into those. What are some of these
advanced techniques developers should be seriously considering?
Speaker 3 (05:42):
All right? First up, app attestation. This is a really
powerful technique. Its job is to prevent your app from
being copied, cloned, or modified at runtime. It basically ensures
that only genuine, untampered versions of your app are allowed
to talk to your back end APIs.
Speaker 1 (05:58):
So it's not just authenticating the user anymore; it's authenticating the app instance itself. Exactly.
Speaker 3 (06:02):
It's a fundamental shift. Is this app, running on this device right now, the real deal? That's the question it answers.
It's crucial for stopping those cloned apps we mentioned.
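The attestation gate Sky describes can be sketched server-side. This is a toy model using an HMAC over a shared secret, purely to show the shape of the flow; real systems use platform services like Apple's App Attest or Google's Play Integrity, and every name and value here is illustrative:

```python
# Sketch of a backend attestation check: the server issues a short-lived,
# signed token to an app instance that passed attestation, and the API
# gateway rejects any request whose token is missing, forged, or stale.
import hashlib
import hmac
import time

ATTESTATION_SECRET = b"demo-secret"  # hypothetical; kept server-side only

def issue_token(app_id: str, now: float) -> str:
    """Issued only after the app instance proves it is genuine."""
    payload = f"{app_id}:{int(now)}"
    sig = hmac.new(ATTESTATION_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token: str, now: float, max_age: int = 300) -> bool:
    """Gateway-side check: signature must match and the token must be fresh."""
    try:
        app_id, issued, sig = token.rsplit(":", 2)
    except ValueError:
        return False  # malformed token
    expected = hmac.new(ATTESTATION_SECRET, f"{app_id}:{issued}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and now - int(issued) <= max_age

token = issue_token("com.example.health", time.time())
print(verify_token(token, time.time()))        # genuine, fresh token passes
print(verify_token(token + "x", time.time()))  # tampered token is refused
```

Because the gateway checks both the signature and the token's age, a cloned or repackaged app that can't complete attestation never gets a valid token, so its API calls are refused before any data flows.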
Speaker 1 (06:13):
Makes sense?
Speaker 3 (06:14):
What else? Building on that, you have runtime device attestation.
This goes further than just a one time check. It
involves continuously scanning the device environment. Is there suspicious software running?
Is the device jailbroken or rooted? It provides real time
info about the device's health and can block requests if
things look bad.
Speaker 1 (06:33):
Okay, so attesting the app, attesting the device environment.
What about the data moving back and forth? You mentioned
TLS can sometimes be bypassed.
Speaker 3 (06:41):
Right. That's where dynamic certificate pinning comes into play. Instead
of just relying on the hundreds of certificates trusted by
the device's operating system, the app is coded to only
trust a very specific, limited set of certificates that you
control for your back end.
Speaker 1 (06:55):
So it drastically narrows the window for man in the
middle attacks.
Speaker 3 (06:59):
Yes, and the dynamic part is important. It includes a
secure way to update those pin certificates within the app
itself without needing a full app store update, So if
a certificate is compromised, you can react immediately across your
entire user base.
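Here's a rough sketch of the pin-check half of dynamic certificate pinning: the app trusts only keys whose hash appears in a small, remotely updatable set. The `PinStore` class and the key bytes are invented for illustration; real pins are typically SHA-256 hashes of your server certificates' public key info (SPKI):

```python
import hashlib

class PinStore:
    """Holds the currently trusted pins; update() models the secure
    over-the-air refresh that avoids shipping a new app version."""
    def __init__(self, pins):
        self.pins = set(pins)

    def update(self, new_pins):
        self.pins = set(new_pins)

    def matches(self, spki_der: bytes) -> bool:
        # Only a server presenting a pinned public key is trusted,
        # regardless of how many CAs the OS trust store contains.
        return hashlib.sha256(spki_der).hexdigest() in self.pins

server_key = b"-----demo public key bytes-----"
store = PinStore([hashlib.sha256(server_key).hexdigest()])

print(store.matches(server_key))   # pinned key accepted
print(store.matches(b"mitm key"))  # anything else rejected

# If the pinned certificate is compromised, rotate instantly for everyone:
store.update([hashlib.sha256(b"new key").hexdigest()])
print(store.matches(server_key))   # old key no longer trusted
```

The narrow trust set is what shrinks the man-in-the-middle window: even a rogue certificate signed by a CA the device trusts fails the pin check.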
Speaker 1 (07:12):
Got it. That seems much more robust. Now, what about
those API keys, the ones that seventy-nine percent statistic highlighted?
Is there a better way than just hiding them in
the code?
Speaker 3 (07:23):
Absolutely. The consensus is clear: API keys needed for sensitive
data access should never be hard coded or stored directly
in the mobile.
Speaker 1 (07:31):
App, period. So how do they get them?
Speaker 3 (07:33):
They should be delivered securely, only when needed, and only
to app instances that have successfully passed attestation checks. This
means you need a secure back end system to manage
and distribute these keys, plus mechanisms to rotate them instantly
if there's any hint of compromise.
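That delivery model can be sketched as a tiny key broker: keys live only on the backend, are handed out only to attested instances, and can be rotated for everyone at once. `KeyBroker` and its methods are hypothetical names used for illustration:

```python
import secrets

class KeyBroker:
    """Backend component; the API key never ships inside the app binary."""
    def __init__(self):
        self._current_key = secrets.token_hex(16)

    def fetch_key(self, attestation_passed: bool):
        """Deliver the key just-in-time, and only to attested instances."""
        if not attestation_passed:
            return None  # untrusted or cloned instance gets nothing
        return self._current_key

    def rotate(self):
        """On any hint of compromise, invalidate the old key everywhere at once."""
        old = self._current_key
        self._current_key = secrets.token_hex(16)
        return old

broker = KeyBroker()
print(broker.fetch_key(attestation_passed=True) is not None)  # attested app gets a key
print(broker.fetch_key(attestation_passed=False))             # failed attestation: None
old = broker.rotate()
print(broker.fetch_key(True) != old)                          # rotated key differs
```

The design point is that there is nothing to extract from the shipped binary: an attacker decompiling the app finds no hard-coded secret, only the attestation-gated fetch path.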
Speaker 1 (07:48):
Okay, that ties back nicely to attestation. What about identity?
If someone steals user credentials, how do these techniques help.
Speaker 3 (07:55):
That's where runtime zero-trust protection against identity exploits fits in.
Even if an attacker gets valid user credentials, the app
and device attestation act as an extra security layer. If
the app isn't genuine or the device looks compromised, the
login attempt can be blocked even with the right password.
It makes credential stuffing attacks much harder, and continuously monitoring
(08:17):
for signs of identity abuse, like unusual login patterns
or locations, becomes a critical part of runtime security.
Speaker 1 (08:24):
It really sounds like building multiple layers of defense. But
you know breaches happen. What about preparing for that inevitable
uh oh moment?
Speaker 3 (08:33):
That's breach readiness and service continuity. Your incident response plan
can't just cover your own systems anymore. What happens if a third party API your app relies on gets breached, say a service your healthcare app uses? Right. You need predefined processes for quickly rotating any keys, secrets, or certificates associated with that third party, and you need ways to
rapidly block specific compromised devices or user accounts from accessing
(08:56):
your services. It's all about assuming a breach will happen
and being ready to contain it fast.
Speaker 1 (09:00):
Okay, those run time protections sound vital for active defense,
but let's shift back to the code itself making it
harder to understand than the first place, code obfuscation. How
does that fit in?
Speaker 3 (09:13):
Right, obfuscation. It's not about making the code impossible to break,
because realistically, nothing is truly unbreakable given enough time and resources.
It's about making reverse engineering and tampering so incredibly difficult,
so time consuming, so expensive that it deters almost everyone
except maybe the most dedicated state sponsored type attackers.
Speaker 1 (09:35):
You're just raising the bar.
Speaker 3 (09:36):
Significantly, exactly, making it not worth the effort for most adversaries. And for mobile apps, probably the most effective route is obfuscation at the LLVM bitcode level. LLVM is common because it's pretty universal, widely adopted.
Speaker 1 (09:48):
LLVM being that compiler infrastructure.
Speaker 3 (09:50):
Yeah, it's a set of tools developers use to build
and optimize software, and it offers powerful ways to manipulate
the code during the build process.
Speaker 1 (09:58):
So what specific techniques are we talking about within LLVM-based obfuscation, say for iOS?
Speaker 3 (10:04):
Okay, several key ones. Symbol renaming is basic but effective. Changing meaningful names like processPayment or userCredentials to short, meaningless ones like a1, b2, or c3d4 makes reading the disassembled code a nightmare.
Speaker 1 (10:20):
Right.
Speaker 3 (10:20):
Then, string encryption. Any sensitive text strings hard coded in the app, API endpoints, keys, error messages, get scrambled.
They're only decrypted right when the app needs to use
them in memory.
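A toy version of decrypt-on-use string encryption shows the shape of the transform: the binary stores only ciphertext, and the plaintext exists in memory just at the call site. Real obfuscators do this automatically at compile time with far stronger ciphers than the XOR used here, which is purely illustrative:

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Symmetric XOR transform: applying it twice recovers the original."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY = b"\x5a\xc3\x17"  # illustrative per-build key embedded by the obfuscator

# What ships in the binary: no readable "https://api.example.com" anywhere,
# so a strings dump of the app reveals nothing useful.
ENCRYPTED_ENDPOINT = xor_bytes(b"https://api.example.com", KEY)

def api_endpoint() -> str:
    """Decrypt only at the call site, right before the string is needed."""
    return xor_bytes(ENCRYPTED_ENDPOINT, KEY).decode()

print(api_endpoint())
print(b"example" in ENCRYPTED_ENDPOINT)  # plaintext absent from the stored form
```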
Speaker 1 (10:31):
Okay.
Speaker 3 (10:31):
Control flow obfuscation, sometimes called flattening, is a big one.
It completely rewrites the logical flow of functions. Instead of
a clear sequence, it breaks the function into lots of tiny,
scattered blocks that jump around nonsequentially. It's incredibly confusing for
static analysis tools and human reverse engineers.
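Control flow flattening is easiest to see side by side: the same computation written as a clear loop, then as the dispatcher-over-scattered-blocks form Sky describes. This hand-flattened toy only hints at what an obfuscator emits, and the checksum itself is just a stand-in function:

```python
def checksum_clear(data: bytes) -> int:
    """The natural, readable version: a simple accumulating loop."""
    total = 0
    for b in data:
        total = (total + b) % 251
    return total

def checksum_flattened(data: bytes) -> int:
    """Same logic flattened into a state-machine dispatcher: block order
    no longer mirrors the logic's natural sequence, which confuses both
    static analysis tools and human readers."""
    state, total, i = 2, 0, 0
    while state != 0:
        if state == 3:    # loop-exit test lives "out of order"
            state = 1 if i < len(data) else 0
        elif state == 1:  # loop body block
            total = (total + data[i]) % 251
            i += 1
            state = 3
        elif state == 2:  # entry block
            total, i = 0, 0
            state = 3
    return total

print(checksum_clear(b"hello") == checksum_flattened(b"hello"))
```

Behavior is identical; only the shape of the control flow graph changes, which is exactly why flattening survives recompilation while defeating pattern-based decompiler heuristics.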
Speaker 1 (10:49):
Sounds like it would be.
Speaker 3 (10:50):
And finally, things like dummy code insertion injecting bits of
code that look like they do something important but actually don't.
It's just noise designed to mislead anyone trying to figure
out the real logic.
Speaker 1 (11:00):
So you're combining these different methods to create layers of
confusion? Exactly.
Speaker 3 (11:04):
Each one tackles a different aspect of reverse engineering. Symbol renaming hits static analysis. String encryption protects embedded secrets, control flow
flattening foils both static and dynamic analysis. It's a multi
pronged defense within the code itself.
Speaker 1 (11:19):
It all sounds like a really smart security move. But
this is where we hit that Apple issue again, right,
the calculated ambiguity.
Speaker 3 (11:26):
Precisely, this is where it gets tricky. Apple's App Store review guidelines don't explicitly ban obfuscation if it's for legitimate security reasons, but, and this is the big but, they strictly forbid any code that seems intended to hide functionality or bypass the review process.
Speaker 1 (11:42):
And aggressive obfuscation might look like that to their automated tools.
Speaker 3 (11:46):
It certainly can. Apple is primarily looking for malware or
apps trying to hide shady behavior. So if your heavy
obfuscation makes it hard for their scanners to understand what
your app does, it can get flagged and rejected.
Speaker 1 (11:58):
So specific guidelines like two point three point one about
hidden features or five point six on the Developer Code
of Conduct they create this gray area.
Speaker 3 (12:06):
They really do, and based on developer reports, common rejection reasons often mention things like obfuscated code or selector mangling.
Speaker 1 (12:14):
Selector mangling? What's that?
Speaker 3 (12:17):
That's specific to Objective-C, where method names, selectors, are modified. Because Objective-C relies heavily on its dynamic runtime,
messing with those selectors seems to be a major red
flag for Apple's review. It directly interferes with how the
language works.
Speaker 1 (12:32):
Interesting, and you mentioned it's not always the developer's own
code causing the.
Speaker 3 (12:36):
Issue. Right. Sometimes the rejection is triggered by obfuscation within
a third party SDK that the developer integrated, a library
for analytics or ADS or whatever. Suddenly the developer has
this huge headache of figuring out which library is causing
the problem. It's a real supply chain risk.
Speaker 1 (12:53):
And just to be clear, the standard tools in Xcode, like telling it to strip symbols or remove dead code? Yeah, that's not the same as this kind of obfuscation.
Speaker 3 (13:01):
No, not at all. They are primarily optimizations. They shrink
the app size, maybe offer a tiny bit of obscurity,
but they won't stop a determined reverse engineer. And remember
Apple deprecated Bitcode starting with Xcode fourteen. That basically put more of the responsibility for the final compiled binary's security back onto the developer's shoulders.
Speaker 1 (13:22):
So if Xcode isn't providing this level of protection, what are the realistic options for developers, especially those in regulated fields who genuinely need strong obfuscation and anti-tampering?
Speaker 3 (13:32):
Well, they often have to look at third party solutions.
There's a spectrum. You have commercial tools like Guardsquare's iXGuard, which is known for really deep compiler-level protection, polymorphism, and features like RASP.
Speaker 1 (13:46):
That's pronounced RASP, right? Runtime application...
Speaker 3 (13:48):
Self-protection, exactly, RASP. Then there are others like Zimperium's suite, focusing on broad threat detection, or Appdome, which offers a sort of no-code approach where security features are fused onto the app binary. On the other end, if you have serious in-house compiler expertise, there are open source options based on LLVM, like Obfuscator-LLVM.
Speaker 1 (14:07):
But that sounds like a heavy lift to maintain yourself.
Speaker 3 (14:10):
It absolutely is. That build-versus-buy decision often leans towards commercial solutions because building and maintaining a custom secure compiler toolchain requires very specialized skills and ongoing effort to keep up with new OS versions, new threats, and Apple's changing rules.
Speaker 1 (14:27):
Okay, this makes sense for banks, fintech, healthcare. This isn't
just good practice, it's often tied to legal and regulatory demands.
How do these techniques map to compliance standards?
Speaker 3 (14:38):
Yeah, that's crucial. The OWASP Mobile Application Security Verification Standard, MASVS, is a key benchmark here, specifically the MASVS-RESILIENCE category. That category defines controls for apps handling sensitive data, apps that absolutely need hardening against reverse engineering and tampering.
Speaker 1 (14:54):
What are some of those MASVS-RESILIENCE controls that
line up with what we've.
Speaker 3 (14:58):
Discussed? Well, they cover platform integrity, things like effective jailbreak or root detection. Anti-tampering mechanisms, like checking the app binary's integrity to make sure it hasn't been modified. Then anti-static analysis techniques, that's your symbol renaming, string encryption, control flow flattening, basically protecting your intellectual property and any embedded secrets. And anti-dynamic analysis, things like detecting debuggers
(15:21):
attached to the app, preventing code hooking, even blocking screen
capture to stop people inspecting the app while it's running.
Speaker 1 (15:27):
So, for a fintech app, how does this relate to something like PCI DSS?
Speaker 3 (15:33):
PCI DSS Requirement 6 is about developing and maintaining secure systems and applications. While it might not explicitly say use code obfuscation, the intent is there. If an attacker can easily reverse engineer your fintech app, figure out how it builds API calls, find encryption keys, or tamper with payment logic, you're clearly not meeting Requirement 6. So demonstrating you have
(15:54):
these protections is vital for due diligence, especially if a breach occurs. And for...
Speaker 1 (15:59):
Healthcare apps under the half test security rule.
Speaker 3 (16:01):
HIPAA's Security Rule mandates appropriate administrative, physical, and technical safeguards for ePHI. Code obfuscation and RASP are absolutely technical safeguards in this context. They directly protect against attackers understanding how your app handles sensitive health data, finding hard coded credentials,
(16:21):
or bypassing authentication mechanisms protecting that data.
Speaker 1 (16:25):
So just relying on the OS security, like Apple's sandbox, that's not enough for HIPAA compliance at the application level?
Speaker 3 (16:32):
No, it's generally considered insufficient on its own. HIPAA requires safeguards for the data wherever it is, including within the application logic itself. Platform security protects the device; you need to protect the app. Okay.
Speaker 1 (16:44):
So, given all these technical needs and the tricky platform rules,
what's the strategic advice for developers trying to implement this
stuff effectively?
Speaker 3 (16:51):
A tiered approach usually makes the most sense. Align the
level of protection with the risk profile of the app.
Your highest risk apps, maybe core banking or patient portal apps, might mandate top-tier commercial solutions covering full MASVS-RESILIENCE. Medium risk apps might focus obfuscation just
on the most critical code paths. Lower risk apps might
(17:13):
rely more on the native hardening.
Speaker 1 (17:15):
Xcode offers. And integrate it early?
Speaker 3 (17:17):
Definitely. Bake these protections into your CI/CD pipeline from the start. Test constantly, keep an unobfuscated version for your own debugging, automate
the configuration as much as possible, and keep a close
eye on performance impact and.
Speaker 1 (17:29):
The million dollar question, how do you talk to Apple
about this? To avoid getting.
Speaker 3 (17:33):
Rejected? Transparency and careful wording are key. Use that Notes for Review section in App Store Connect. Critically, don't use the word obfuscation.
Speaker 1 (17:42):
Really, what should you say?
Speaker 3 (17:43):
Instead, frame it entirely in terms of security, compliance, and user safety. Something like: this application incorporates advanced security controls necessary to comply with [mention relevant regulation, e.g., financial services regulations] and protect sensitive user data from tampering and reverse engineering, consistent with industry best practices like the OWASP Mobile Application
(18:06):
Security Verification Standard. And you add something like: these controls are integral to our security architecture and do not obscure or hide any application functionality from review.
Speaker 1 (18:17):
Ah, so you're shifting the narrative. It's not about hiding things.
It's about responsible security engineering required by regulations or best
practices exactly.
Speaker 3 (18:25):
You're explaining why it's there and reassuring them it's not
meant to deceive the review process.
Speaker 1 (18:29):
It really does sound like a continuous cat and mouse game.
Looking ahead, what do you see on the horizon for
mobile app security in this whole protection space.
Speaker 3 (18:38):
Well, one thing is the rise of AI-powered tools. We're starting to see large language models that are getting pretty
good at analyzing and even explaining code that could potentially
make reverse engineering easier or at least faster.
Speaker 1 (18:50):
In the future, which makes static obfuscation less effective over time.
Speaker 3 (18:55):
Potentially. Yes, it really underscores the importance of techniques that
are dynamic and polymorphic, protections that change themselves, that present
a moving target rather than just relying on making the
code statically hard to read. Things like RASP become
even more important and fundamentally. While platform vendors like Apple, Google, Samsung,
(19:16):
Huawei, they'll keep improving device security, the core responsibility for securing the application's logic, its secrets, its behavior, that's going to stay with the developer. Organizations can't really afford to
sit back and wait for the platforms to solve everything.
They need to take ownership of their apps' integrity right now.
Speaker 1 (19:32):
This has been incredibly insightful. We've really unpacked the layers
here, from the man-at-the-end problem, through specific defenses like app attestation and dynamic certificate pinning, and navigating that complex world of code obfuscation and Apple's guidelines.
Speaker 3 (19:47):
Yeah, understanding all these different layers platform security, in app hardening,
API protection is just critical knowledge for any mobile developer
or security professional today. I think the main takeaway is
that strong mobile security isn't a feature you add at
the end. It's an ongoing, integrated process. It requires specific
technical tools and frankly, a smart approach to dealing with
(20:09):
the platform ecosystems.
Speaker 1 (20:11):
Absolutely, this knowledge helps you build, not just apps that work,
but apps that are genuinely secure and ready for what's
coming next in the digital landscape. This program was made
possible by human sources and assisted by AI.