Episode Transcript
Speaker 1 (00:00):
Welcome back to Upwardly Mobile, where we tackle the big
issues in mobile app and API security. Today we're jumping
straight into, well, a huge security failure. And this wasn't
about sophisticated hackers or zero days. No, this was about
sheer negligence, easily avoidable stuff. We're talking about a breach
impacting over one point eight million users across more than
(00:21):
nine hundred mobile apps. Think health, finance, education apps.
Speaker 2 (00:25):
Yeah, it's a stark reminder of how basic security can
fail spectacularly. The core issue? A misconfiguration involving a very
popular back-end service, Google's Firebase.
Speaker 1 (00:36):
Right, Firebase, for anyone needing a quick refresher: it's a
backend as a service, a BaaS platform. It
gives developers ready-made back-end tools, databases, authentication, storage,
speeds things up massively.
Speaker 2 (00:47):
Immensely helpful. Yes, but as this case shows, it carries
significant risk if it's not managed correctly from a security perspective.
Speaker 1 (00:53):
So our goal today is to really unpack the reports
on this. What data got out? Why are these basic
security rules still.
Speaker 2 (00:59):
Being missed? And crucially, how did attackers automate this? How
did they turn simple misconfigurations into, well, a mass data
harvesting operation?
Speaker 1 (01:07):
Okay, let's start with the scale. One point eight million
users is a lot, but what kind of data are
we talking about? That's where the real negligence shows.
Speaker 2 (01:15):
Absolutely. The exposed databases held incredibly sensitive info. You've
got the usual suspects, usernames, emails, phone numbers, bad enough,
but the sources confirmed things like full billing details and,
the worst part, plaintext passwords.
Speaker 1 (01:31):
Plaintext passwords just sitting.
Speaker 2 (01:33):
There, exactly. No encryption, no hashing, no salting, just raw,
readable passwords available to anyone who stumbled upon the open database.
Speaker 1 (01:41):
It's almost twenty twenty five. I mean, how is this
still happening? Experts are calling it reckless, inexcusable.
Speaker 2 (01:46):
Why? Because it breaks the absolute cardinal rule of password security,
something taught on day one: passwords must be hashed and
salted, period. Storing them unencrypted means either a complete lack
of basic training or just unbelievable carelessness. And think about
password reuse. You know, users use the same password everywhere,
so this doesn't just risk those nine hundred apps, it.
Speaker 1 (02:08):
Risks their bank accounts, their email, everything.
Speaker 2 (02:10):
It shatters user trust, not just in these apps, but
potentially in the whole ecosystem.
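For reference, a minimal Python sketch of that day-one rule, hash and salt, never store the raw value; the scrypt parameters here are one common illustrative choice, not a prescription.

```python
import hashlib
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a per-user random salt; never store the plaintext."""
    salt = os.urandom(16)  # unique salt per user defeats rainbow-table lookups
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest  # persist both; verification re-derives and compares
```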
Speaker 1 (02:14):
Okay, so that brings us to the root cause. Multiple
reports called this a preventable disaster, not a complex hack.
So what was the actual failure? What allowed attackers to
just walk in?
Speaker 2 (02:26):
What's fascinating here is it often comes down to how
Firebase is set up, or rather not set up securely.
It's powerful, great for speed, but it has this known issue.
Developers frequently misconfigure the access rules, especially during testing.
Speaker 1 (02:41):
Right, the defaults are wide.
Speaker 2 (02:42):
Open, exactly. When you first set up a Firebase Realtime
Database, the default rules can be extremely permissive, like
".read": true, ".write": true.
Speaker 1 (02:53):
Meaning anyone can read, anyone can write. Correct.
Speaker 2 (02:56):
That's fine for maybe five minutes of initial testing, but
it needs to be locked down immediately with proper authentication
rules like ".read": "auth != null", ensuring only logged-in
users can access data.
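For reference, here is roughly what the two configurations just described look like in a Realtime Database rules file. First, the wide-open test-mode rules, where anyone on the internet can read and write:

```json
{
  "rules": { ".read": true, ".write": true }
}
```

And a locked-down version requiring a signed-in user, a minimal sketch rather than a complete production ruleset:

```json
{
  "rules": { ".read": "auth != null", ".write": "auth != null" }
}
```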
Speaker 1 (03:06):
But developers seem to leave it in this test mode.
The sources mentioned Firebase warns this expires after thirty days,
so how does it stay open in production?
Speaker 2 (03:15):
We see a couple of things happening. First, some developers
actually bypass that thirty day limit deliberately. Why? Convenience.
Setting up granular security rules takes time, adds friction, and
under pressure, security gets.
Speaker 1 (03:27):
Pushed aside, so they just extend the insecure period indefinitely.
Speaker 2 (03:31):
It seems like it, yeah. Or, second scenario, especially with
smaller teams or older projects, the app gets deployed and
then nobody checks the cloud config again. It's forgotten.
The cost of that friction, of slowing down to do
security right, feels higher than the risk, until something like
this happens.
Speaker 1 (03:49):
Right. So it feels like blame falls on both sides.
Developers for the negligence, definitely, but also Firebase itself. The
sources seem critical of the platform for allowing such easy misconfiguration.
Speaker 2 (04:00):
That's a really critical point being raised: cloud tools are
making it this easy for attackers. Basically, if the default
or easiest path for a developer leads straight to a
massive vulnerability, then maybe we should call it liability engineering,
not innovation.
Speaker 1 (04:13):
Liability engineering.
Speaker 2 (04:14):
That's a strong term, but maybe necessary. If the default
isn't secure, if you have to actively work harder to
be secure than insecure, then the platform design itself has issues.
We need secure-by-default setups: make developers consciously choose
to open access, not accidentally leave it gaping.
Speaker 1 (04:32):
Okay, so we have insecure databases just sitting there. But
how did attackers find and exploit nearly a thousand of
them so quickly? Automation, obviously.
Speaker 2 (04:41):
Here's where it gets really interesting and, frankly, quite clever
in a scary way. The key wasn't attacking Firebase directly.
It was using automated scanning tools that pulled information right
out of the app package itself, the thing you download
from the app store.
Speaker 1 (04:55):
Wait, so the attack vector wasn't the live app security logic,
but the static files bundled in the installer.
Speaker 2 (05:01):
Yes, a fundamental vulnerability in how many apps are built.
Attackers use tools, sometimes open-source ones like OpenFirebase,
to do this at scale, no manual checking needed. It
starts with step one, APK analysis. APK is the Android
application package file. They use reverse-engineering tools like jadx
to basically unpack the APK.
Speaker 1 (05:20):
And what are they looking for? Inside?
Speaker 2 (05:22):
They're hunting for specific metadata hard-coded in configuration files,
files like res/values/strings.xml or, crucially,
google-services.json.
Speaker 1 (05:32):
Ah, the file that links the app to the Firebase project.
Speaker 2 (05:35):
Exactly. Inside, they find the Firebase project IDs, API keys,
Google app IDs. These are the identifiers Firebase uses.
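To make step one concrete, here's a minimal Python sketch that scans a decompiled APK for those identifiers. It assumes the APK was already unpacked with a tool like jadx into a local decoded/ directory; the directory name and the exact resource keys are illustrative.

```python
import pathlib
import re

# Resource keys Firebase tooling typically writes into strings.xml and
# google-services.json; this list is illustrative, not exhaustive.
IDENTIFIERS = re.compile(
    r"firebase_database_url|google_api_key|google_app_id|project_id"
)

decoded = pathlib.Path("decoded")  # hypothetical jadx output directory
for path in decoded.rglob("*"):
    if path.is_file() and path.suffix in {".xml", ".json"}:
        for line in path.read_text(errors="ignore").splitlines():
            if IDENTIFIERS.search(line):
                print(f"{path}: {line.strip()}")
```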
Speaker 1 (05:43):
So by putting these directly in the app bundle.
Speaker 2 (05:45):
The developer essentially hands the attacker the address and the
credentials needed to talk directly to the back end database.
Speaker 1 (05:51):
Which makes the next step terrifyingly simple. You have the address,
you just knock. Precisely.
Speaker 2 (05:55):
That's step two, Realtime Database enumeration. The scanner takes
that extracted project ID and sends a simple HTTP GET
request, something like curl -s https://<project-id>-default-rtdb.firebaseio.com/.json. Standard.
Speaker 1 (06:12):
Stuff. And if the database is open, bingo.
Speaker 2 (06:14):
The server responds with HTTP two hundred OK and maybe
even the first chunk of JSON data. That two hundred
OK is the signal: come on in, the data is fine.
Essentially, yes, the scanner flags it as public
and starts scraping everything, all the PII, billing info, and
those plaintext passwords we discussed, at machine speed.
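A minimal Python sketch of that enumeration step, assuming a project ID already pulled from an APK; "example-project" is a placeholder, and the shallow parameter just keeps the probe response small.

```python
import requests

project_id = "example-project"  # placeholder extracted from an APK
url = f"https://{project_id}-default-rtdb.firebaseio.com/.json"
resp = requests.get(url, params={"shallow": "true"}, timeout=10)
if resp.status_code == 200:
    # 200 OK on an unauthenticated request means the database is world-readable.
    print("Publicly readable:", resp.text[:200])
```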
Speaker 1 (06:30):
The sources mentioned something about a two-step lookup
making this even more effective globally.
Speaker 2 (06:35):
Ah, yes, that shows the sophistication of these automated tools.
Firebase databases can be hosted regionally, not just the
US default. So if the first GET request fails because,
say, the database is in Europe or Asia, the error
message Firebase sends back often reveals the correct regional endpoint URL.
Speaker 1 (06:54):
Oh wow.
Speaker 2 (06:55):
The scanner doesn't give up. It parses that error, grabs
the correct regional URL, and automatically tries again.
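Sketched in Python, the two-step lookup looks roughly like this. The exact format of Firebase's error body varies, so the regex for the regional endpoint is an assumption based on the behavior described above.

```python
import re
import requests

def probe(project_id: str):
    """Try the default US endpoint, then retry a regional URL if the error reveals one."""
    url = f"https://{project_id}-default-rtdb.firebaseio.com/.json"
    resp = requests.get(url, params={"shallow": "true"}, timeout=10)
    if resp.status_code == 200:
        return resp
    # Per the reports, the error text often embeds the correct regional
    # endpoint (for example on *.firebasedatabase.app); grab it and retry.
    match = re.search(r"https://[\w.-]+\.firebasedatabase\.app", resp.text)
    if match:
        return requests.get(f"{match.group(0)}/.json",
                            params={"shallow": "true"}, timeout=10)
    return None
```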
Speaker 1 (07:02):
So it finds almost every misconfigured instance, no matter where.
Speaker 2 (07:05):
It's hosted, exactly. No guesswork, pure automation. It turns potentially
isolated leaks into an instant global harvesting operation.
Speaker 1 (07:13):
Did they stop at the database, or did those hard-coded
keys open other doors?
Speaker 2 (07:16):
They kept going. That brings us to step three, Remote
Config exploitation. Remember, they still have the Google API key and
Google app ID from the APK. They use these to make
a POST request to Firebase's Remote Config API. Developers use
this to push config updates to apps without a full
app store release.
Speaker 1 (07:35):
Okay, so what happens if that's not secured.
Speaker 2 (07:36):
Either way, a successful two hundred OK response there means unauthenticated
access to potentially more sensitive configuration data, and this is
where some really high-value targets were apparently found, things
like credentials for third-party services, private API keys.
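A hedged Python sketch of step three. The endpoint shape below is the client fetch URL commonly cited in Firebase security research, not something confirmed by these reports, and all three identifiers are placeholders for values extracted from an APK.

```python
import requests

project_id = "example-project"               # placeholder
api_key = "AIza...placeholder"               # placeholder Google API key
app_id = "1:1234567890:android:placeholder"  # placeholder Google app ID

url = (
    "https://firebaseremoteconfig.googleapis.com/v1/"
    f"projects/{project_id}/namespaces/firebase:fetch?key={api_key}"
)
resp = requests.post(url, json={"appId": app_id, "appInstanceId": "probe"},
                     timeout=10)
if resp.status_code == 200:
    # An unauthenticated 200 here means Remote Config values are exposed.
    print(resp.json())
```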
Speaker 1 (07:53):
Reports mentioned some shocking finds.
Speaker 2 (07:54):
Yeah, things like exposed storage buckets with user ID photos,
millions of them, a huge privacy breach, but also more
cleartext passwords and, in the absolute worst cases, AWS root access.
Speaker 1 (08:06):
Tokens. AWS root tokens. That's not just Firebase data anymore.
That's potentially the keys to the entire kingdom, the whole
cloud infrastructure.
Speaker 2 (08:15):
It is. It shows how one mistake, one misconfiguration in
a BaaS platform, can cascade into a complete system compromise.
It's not just about that one database anymore.
Speaker 1 (08:24):
So connecting this all together, this isn't just about Firebase,
is it? It points to a wider systemic issue in mobile
app development.
Speaker 2 (08:31):
It really does. The convenience of these powerful platforms is fantastic,
but it has to be balanced with security by default.
That's one part. But the other part is on the
development side. We absolutely need mandatory, targeted security training for
mobile developers, iOS, Android, HarmonyOS, cross-platform like Flutter, React Native,
doesn't matter.
Speaker 1 (08:52):
They need to understand the implications of what they put
inside the app bundle, right? Yeah, what happens when someone
decompiles their APK or IPA.
Speaker 2 (08:59):
Precisely. They need to understand cloud configurations, secure defaults, the
risks of hard-coding anything sensitive.
Speaker 1 (09:06):
And the platform providers, Google in this case, more accountability
needed there?
Speaker 2 (09:09):
Absolutely. If the tooling allows or even encourages bypassing
security warnings, like that thirty day test mode expiration, then
the platform itself needs stricter defaults. Make security the easy path,
not the hard one.
Speaker 1 (09:22):
Security needs to be baked in from the start, design phase, deployment,
ongoing monitoring, not an afterthought.
Speaker 2 (09:28):
Couldn't agree more. Proactive risk assessment, secure configurations from day
one, regular checks. Honestly, if a proper security review had
happened before these apps launched, this whole breach impacting over
nine hundred apps could likely have been prevented.
Speaker 1 (09:43):
So the big takeaway here. Yes, massive breaches can start
from simple misconfigurations, that's not new, but the exploitation is
now hyper automated. A small leak becomes a massive hemorrhage
almost instantly because scanners are constantly probing these vulnerabilities at scale.
Speaker 2 (09:59):
Which leads to a really important question for every mobile
developer listening right now. Think about it. The keys to
unlock millions of records were just sitting inside the app package,
the APK file. So where does the primary responsibility lie now?
Is it just about tightening server-side security rules, or
do we fundamentally need to stop putting any sensitive keys
(10:19):
or identifiers in the client-side bundle altogether?
Speaker 1 (10:22):
Moving towards maybe more dynamic runtime secret management instead of
static config files?
Speaker 2 (10:27):
That seems to be the direction things need to head.
Relying less on static secrets embedded in the app seems critical.
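A minimal sketch of that runtime-secret idea: the client ships with nothing sensitive baked in and asks its own backend for a short-lived token after the user signs in. The URL and field names here are hypothetical.

```python
import requests

def fetch_runtime_secret(session_token: str) -> str:
    """Exchange an authenticated session for a short-lived backend token."""
    resp = requests.post(
        "https://api.example.com/v1/runtime-secrets",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {session_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]  # short-lived, rotated server-side
```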
Speaker 1 (10:33):
A lot for developers and security teams to think about there.
Thanks for breaking that down for us. And, just to
be fully transparent, the analysis you heard today, synthesizing those
technical reports, was put together by our human team using
AI tools to help structure and process the complex information quickly.
Speaker 2 (10:51):
Stay vigilant out there, keep those configurations locked down. It
really matters.