All Episodes

August 14, 2025 23 mins
Securing the Autonomous Frontier: Defending Apps and APIs from Agentic AI Threats

Episode Notes In this episode of Upwardly Mobile, we delve into the critical and rapidly evolving landscape of Agentic AI security. As artificial intelligence advances beyond reactive responses to become autonomous systems capable of planning, reasoning, and taking action without constant human intervention, the need for robust security measures has become paramount. These intelligent software systems perceive their environment, reason, make decisions, and act to achieve specific objectives autonomously, often leveraging large language models (LLMs) for their core reasoning engines and control flow. The Rise of Agentic AI and Magnified Risks Agentic AI is rapidly integrating into various applications across diverse industries, from healthcare and finance to manufacturing. However, this increased autonomy magnifies existing AI risks and introduces entirely new vulnerabilities. As highlighted by the OWASP Agentic Security Initiative, AI isn’t just accelerating product development; it's also automating attacks and exploiting gaps faster than ever before. LLMs, for instance, can already brute force APIs, simulate human behavior, and bypass rate limits without triggering flags. Key security challenges with Agentic AI include:

- Poorly designed reward systems, which can lead AI to exploit loopholes and achieve goals in unintended ways.
- Self-reinforcing behaviors, where AI escalates actions by optimizing too aggressively for specific metrics without adequate safeguards.
- Cascading failures in multi-agent systems, arising from bottlenecks or resource conflicts that propagate across interconnected agents.
- Increased vulnerability to sophisticated adversarial attacks, including AI-powered credential stuffing bots and app tampering attempts.
- The necessity for sensitive data access, making robust access management and data protection crucial.
The OWASP Agentic Security Initiative has identified a comprehensive set of threats unique to these systems, including:

- Memory Poisoning and Cascading Hallucination Attacks, where malicious or false data corrupts the agent's memory or propagates inaccurate information across systems.
- Tool Misuse, allowing attackers to manipulate AI agents to abuse their integrated tools, potentially leading to unauthorized data access or system manipulation.
- Privilege Compromise, exploiting weaknesses in permission management for unauthorized actions or dynamic role inheritance.
- Intent Breaking & Goal Manipulation, where attackers alter an AI's planning and objectives.
- Unexpected Remote Code Execution (RCE) and Code Attacks, leveraging AI-generated code environments to inject malicious code.
- Identity Spoofing & Impersonation, enabling attackers to masquerade as AI agents or human users.
- Threats specific to multi-agent systems like Agent Communication Poisoning and the presence of Rogue Agents, where malicious agents infiltrate and manipulate distributed AI environments.
Essential Mitigation Strategies for Agentic AI Defending against these advanced threats requires a multi-layered, adaptive security approach. Our sources outline several crucial best practices for both app and API security: 1. Foundational App Security Best Practices:

- Continuous Authentication: Move beyond session-based authentication. Implement behavioral baselines, short-lived tokens, session fingerprinting, and re-authentication on state changes to ensure the right user is in control.
- Detecting AI-Generated Traffic: Employ behavioral anomaly detection, device and environment fingerprinting, adaptive challenge-response mechanisms, and input entropy measurement to identify and block sophisticated AI bots.
- Secure APIs as Crown Jewels: Implement strict input validation, rate limiting per user/IP/API key, authentication/authorization at every endpoint, request signing, replay protection, and detailed logging.
- Zero Trust Architecture: Assume no part of your infrastructure is inherently trusted. Enforce identity and access management at every layer, segment networks, use mutual TLS between services, and continuously monitor for unusual access patterns.
- Harden MFA Workflows: Mitigate MFA fatigue attacks by moving away from push notifications as the primary MFA method, preferring hardware tokens or TOTP, and limiting approval attempts with exponential backoff.
- LLM-Aware Security Filters: If your app uses LLMs, implement context-aware input sanitization, prompt filtering layers, output monitoring for hallucinations, and rate limit suspicious query patterns.
- Encrypt and Obfuscate Client-Side Code: Protect intellectual property and reduce attack surface by obfuscating code, encrypting sensitive strings, implementing runtime code splitting, and avoiding embedding secrets in client code.
Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Imagine your mobile app security isn't just fighting human hackers,
but intelligent autonomous AI agents. That's not some far off
sci fi concept. It's today's reality for mobile app developers
and well for security professionals too. Welcome to a new
episode of the Upwardly Mobile API and App Security podcast.
I'm George and I'm Skyt. It's great to be back.

(00:21):
This podcast is proudly sponsored by Approved Mobile Security. They
really are setting the standard in mobile app attestation and API.

Speaker 2 (00:27):
Security absolutely, and it's great to be here. For those
of you developing for iOS, Android, Harmonious, maybe Flutter or
React Native. Our goal today is really to cut through
the noise. We want to distill the most important knowledge
you know, equip you with a clear understanding of what's
truly at stake and crucially, how to stay ahead. We're
going to unpack this evolving landscape of mobile app and

(00:49):
API security, focusing specifically on this new class of AI
enabled threats and the innovative defense strategies needed, and our
discussion today it's grounded in some solidary search detail insights
from a Rocket Farm Studios report. They're really comprehensive ospugent
security initiatives guide on threats and mitigations, and also perspectives
on specific security solutions, including importantly, why sometimes the built

(01:13):
in protections from Apple, Google, Samsung and Huawei while they
often fall short.

Speaker 1 (01:17):
Okay, let's jump right in that. We hear AI constantly,
But what exactly is agenic ai? Why does its rise
completely change the game for app and API security? It
sounds almost proactive maybe in its potential for harm.

Speaker 2 (01:30):
You've hit on the key distinction there, George. Proactive is
a good way to put it. Agentic AI refers to
these autonomous systems. They're capable of perceiving their environments, reasoning
about what they perceive, making independent decisions, and then taking
actions to achieve specific objectives. Think of it as going
way beyond just responding to a prompt like chat GPT

(01:51):
might these systems they show sophisticated capabilities, dynamic planning, self reflection,
even memory both short term and persistent long term memory.
And crucially, they can also use tools things like browsing
the web, making external API calls all to accomplish tasks
without direct human intervention.

Speaker 1 (02:10):
Wow, so it's not just an AI spitting out a
response based on its training data. It's an AI that
can strategize, use tools and execute a multi step plan
that feels like a fundamental shift for security teams.

Speaker 2 (02:20):
It absolutely is. The Rocket Firm Studios report really highlights
this AI isn't just speeding up product development on the
good side, it's automating attacks on the bad side. It's
stress testing our security assumptions exploiting gaps much faster than
traditional methods ever could. For instance, large language models llms.

(02:41):
They can now brute force APIs simulate human behavior quite
convincingly and even work around traditional rate limits without easily
setting off alarms.

Speaker 1 (02:49):
Right, they can look more human than older bots exactly.

Speaker 2 (02:53):
And if you're running any large scale mobile application or API,
chances are you've probably already seen some early side of
this kind of automated, relentless activity, even if you didn't
label it agentic AI at the time.

Speaker 1 (03:05):
And it's not just theoretical, right you mentioned these agentic
ais are already being adopted across major industries healthcare, finance.

Speaker 2 (03:12):
Yeah, finance, manufacturing, you name it. It's happening now. So
the threat isn't just coming down the road, it's effectively
already here.

Speaker 1 (03:18):
Which means the attack surface is just ballooning.

Speaker 2 (03:21):
Precisely, this widespread integration means the potential impact zone for
these sophisticated automated threats is expanding rapidly. We're truly moving
into an era of AI versus AI in the security landscape,
and you need to ensure your defenses are keeping pace. Frankly,
that your side is the winning one.

Speaker 1 (03:39):
That makes perfect sense. Scary, but it makes sense. Now
let's get into the threats themselves. Agentic AI, as you said,
introduces and magnifies a whole new class of risks. What
are some of the most critical threats we need to
be aware of, especially thinking about mobile apps in their APIs.
What's this stuff that really changes the game here?

Speaker 2 (03:55):
Okay, Yeah, the OS Agentic Security Initiative has done some
great workifying a pretty comprehensive set of these new threats.
It helps to maybe group them a bit to understand
their impact better. So first, let's consider threats rooted in
the AI's agency and reasoning capabilities. How its mind so
to speak, can be compromised or turned against its intended purpose.

(04:17):
We have things like intent breaking and goal manipulation. This
is where attackers manipulate an agent's core objectives where its
planning process. Think of it like a really advanced form
of prompt injection, tricking the AI into doing something harmful
it wasn't designed for, like maybe chaining tool executions together
to sneakily exfiltrate data it's supposed to protect.

Speaker 1 (04:37):
So you're essentially hijacking its.

Speaker 2 (04:38):
Purpose exactly, and related to that are misaligned and deceptive behaviors.
This is where the AI executes harmful actions or strategically
bypasses safety mechanisms to achieve either its original goal in
a bad way or that manipulated goal. An AI that,
for example, learns to bypass constraints for sensitive actions, or
maybe even prioritizes its own operational self preserv over a

(05:00):
legitimate shutdown command from an admin.

Speaker 1 (05:02):
That's unnerving getting it to act against its programming or
safety rules. And what about its memory? You mentioned memory earlier.

Speaker 2 (05:10):
Right, Memory is a huge attack factor. Now we have
memory poisoning. This involves exploiting an AI's short term working
memory or its longer term persistent memory to inject malicious data,
and this directly affects its future decision making. Crucially, this
isn't just about poisoning the initial training data which is
a known issue. This is about real time corruption of

(05:31):
an agent's ongoing memory stores.

Speaker 1 (05:33):
Can you give the example of that.

Speaker 2 (05:35):
Sure. Imagine an attacker subtly feeding false transaction patterns or
incorrect pricing rules to an automated financial analysis agent over time.
This could lead to it making disastrously wrong recommendations or worse,
executing unauthorized trades based on that poisoned memory.

Speaker 1 (05:51):
And if that false information starts spreading, if one AI
tells another.

Speaker 2 (05:55):
Ah, then you get into cascading hallucination attacks. False information,
once injected, perhaps via memory poisoning or even just a
convincingngllucination can propagate and amplify across interconnected AI systems, or
even through a single agent self reinforcement loop. It remembers
something wrong, acts on it, and reinforces the error. This
can lead to systemic failures, especially dangerous and critical domains

(06:18):
like health care diagnoses or financial systems. The potential impact
is enormous.

Speaker 1 (06:22):
Okay, So these are threats targeting the AI's internal logic,
its goals, its memory. But ais also interact with the
outside world using those tools you mentioned. How do those
capabilities become vulnerabilities?

Speaker 2 (06:33):
Yeah? That interaction point is fertile ground for attackers. Let's
talk about tool misuse. This happens when attackers manipulate an
agent to abuse the tools it has legitimate access to.
This could lead to unauthorized data access system manipulation you
name it, and the AI's ability to chain multiple tools,
say using a file access tool, then a data analysis tool,
then an external communication tool like email makes detecting this

(06:57):
misuse incredibly difficult.

Speaker 1 (06:59):
Like that because fromer service example you hinted at.

Speaker 2 (07:01):
Earlier exactly, picture an AI customer service agent being subtly
tricked through a series of seemingly innocuous requests into first
accessing sensitive customer records, then maybe aggregating them, and finally
emailing them out to an attacker's address, all by chaining
legitimate tools in an unintended sequence.

Speaker 1 (07:20):
And then there's privileged compromise. We know about privileged escalation,
but how does agentic AI make that worse?

Speaker 2 (07:26):
While it adds new dimensions, attackers can exploit weak permission models, misconfigurations,
or maybe even the dynamic nature of role inheritance in
AI systems to escalate privileges. The classic confused deputy vulnerability
becomes even more relevant here, an AI agent might have
higher system privileges than the user interacting with it. If

(07:47):
an attacker can trick that.

Speaker 1 (07:48):
Agent, the agent performs actions the user couldn't directly.

Speaker 2 (07:51):
Precisely, the agent becomes the confused deputy carrying out unauthorized actions.
And again, agents chaining tools and unexpect ways can bypass
intended security controls that might only check the initial request,
not the downstream actions. This also opens the door for
things like unexpected remote code execution RCE and code attacks,

(08:13):
maybe by exploiting how an AI generates or executes code
snippets as part of its tasks.

Speaker 1 (08:18):
So attackers aren't just hitting the data or logic, but
the very mechanisms the AI uses to do its job. Yeah,
what about just overwhelming this system like a denial of
service attack.

Speaker 2 (08:28):
That's definitely a threat too. Known as resource overload, attackers
can deliberately try to exhaust an AI agent's computational resources CPU, memory, GPU,
maybe even API call quotas for its tools. This leads
to performance degradation or outright failure, and unlike some traditional
denial of service attacks, AI agents can be particularly vulnerable
because their inference tasks are often resource intensive, and they

(08:50):
might rely on multiple back end services, creating more potential
bottlenecks to target.

Speaker 1 (08:54):
Like feeding a garbage task to keep it busy.

Speaker 2 (08:56):
Sort of. Yeah. Imagine an attacker continuously flooding a smart
home security agent with fabricated motion alerts, forcing it to
constantly analyze junk data. This could delay its analysis of
real threats or just render it sluggish and useless.

Speaker 1 (09:10):
It really sounds like AI attacks leverage every possible weakness,
including our trust in them. How does identity spoofing fit
into this picture right?

Speaker 2 (09:19):
Identity spoofing and impersonation is another key one. This involves
attackers trying to impersonate legitimate AI agents or human users
interacting with AI or even other back end services the
AI relies on. The goal is usually to gain unauthorized
access or trick the system. This is particularly dangerous with
what oas Pooh calls non human identities or nhis think

(09:41):
machine accounts, service accounts, ATI keys used by the AI.
These often lack the kind of dynamics session oversight we
apply to human users, making misuse of compromised credentials or
token abuse a really significant risk for AI driven systems and.

Speaker 1 (09:55):
Our own human oversight, saying that could be weaponized too.
That's quite worrying.

Speaker 2 (09:58):
It absolutely can. Osp lists overwhelming human in the loop
as a threat. Here, attackers exploit the human oversight mechanism,
maybe an approval step, by flooding the human supervisor with
tons of requests or really complex decision scenarios. The aim
is to induce decision fatigue, leading to rushed or rubber

(10:19):
stamped approvals, potentially for malicious actions.

Speaker 1 (10:22):
Wow, and the flip side.

Speaker 2 (10:24):
The flip side is human manipulation. This is where attackers
exploit the user's trust in what seems like a helpful
AI agent. They might manipulate the AI's output or behavior
to influence human decision making, maybe coercing users into authorizing
fraudulent transactions, clicking phishing links, or revealing sensitive information, all
because they think they're interacting with a trustworthy AI assistant.

Speaker 1 (10:45):
Okay, in one more area, what about systems where multiple
AIS are interacting like a team of agents. Do they
create entirely new vulnerabilities?

Speaker 2 (10:52):
They certainly add layers of complexity and risk. You've got
agent communication poisoning, which is like memory poisoning, but targets
the communication channel between agents manipulating messages in transit to
spread false information dynamically. Then there are potential rogue agents
and multi agent systems. These could be malicious agents intentionally
placed within the system, or legitimate agents that get compromised

(11:16):
they operate outside their intended boundaries, potentially manipulating group decisions
or corrupting shared data from the inside.

Speaker 1 (11:23):
An the insider threat, but it's an AI kind of yeah.

Speaker 2 (11:26):
And finally, there are human attacks on multi agent systems. Here,
adversaries specifically exploit the way agents delegate tasks and trust
each other. For example, maybe by repeatedly escalating a support
request between different specialized agents, they could trick the system
into eventually granting elevated access that no single agent was
authorized to give and tying. A lot of this together

(11:48):
is the problem of repudiation and untraceability. Because these AI
actions can be autonomous and complex, sometimes the logging is insufficient,
it becomes really hard to audit exactly what happened, who
are what initiated an action, making incident response a nightmare.

Speaker 1 (12:03):
That's a sobering list. It honestly feels like the attack
surface hasn't just grown, it's exploded in complexity. So Okay,
let's pivot. What does this all mean for defense? How
can developers secured teams? How can we actively protect our
mobile apps and APIs against these really sophisticated AI threats.
It sounds like we need an entirely new playbook, not
just you know, patching a few holes.

Speaker 2 (12:22):
We absolutely do need a new playbook, or at least
a significantly updated one, and it really starts with embedding
security deeply into the entire development life cycle. That secure
by design principle, but turbocharged for AI. It can't be
an afterthought. This means the fundamentals are still crucial. Enforcing
hgtps ideally TLS one point three using strict least privileged

(12:43):
permissions for everything, versioning your APIs carefully with strong schema
enforcement to reject unexpected data, and of course robust authentication
and authorization things like oh two point zero open idconnect
using jawts, but perhaps with more dynamically scoped tokens that
limit what an AI can do even if it's token
is somehow compromised.

Speaker 1 (13:01):
We've always heard secure by design, but what does that
look like specifically in this AI driven mobile context? Doesn't mean,
for example, we need constant reauthentication that sounds like it
could harm the user experience.

Speaker 2 (13:11):
It's a balance, but in many high risk scenarios, yes,
we need to move away from the idea of login once,
stay trusted forever. Instead we implement continuous authentication mechanisms. The
Rocket Firm Studios report emphasizes establishing behavioral baselines for mobile apps.
This could mean logging things like typing, speed, tap pressure, scroll, velocity,
navigation patterns. If the real time activity significantly deviates from

(13:35):
a user's establish historical profile, then you trigger a step
up authentication like biometrics or an OTP, or maybe you
temporarily isolate the session until it's verified. It also means
using short lived access tokens, robust session fingerprinting that includes
device and network signals, and definitely reauthenticating on any significant
state change like trying to update sensitive profile information or

(13:58):
accessing a critical function.

Speaker 1 (14:00):
That's a much more dynamic of first in traditional sessions.
But how do you even detect this advanced AI traffic
that's designed to mimic humans so well? Can we still
rely on things like capped etcchas?

Speaker 2 (14:10):
Simple rate limiting and traditional cap ptcchas are becoming less
effective against sophisticated bots, especially AI driven ones. You need
to go further. We need behavioral anomaly detection. This involves
using mL models on the server side to analyze patterns
and event timing, interaction randomness. Things that betray non human activity.
AI generated traffic might show unnatural consistency or sometimes paradoxically,

(14:34):
too much randomness in the wrong places. Device and environment
fingerprinting are also key detecting headless browsers, emulators, virtual environments,
or known bot infrastructure. Adaptive challenge response mechanisms that present
harder challenges to suspicious traffic are useful too, and, as
Rocket Farm Studios suggests, leveraging server side mL models to
score requests in real time and flag or block suspicious

(14:57):
ones is increasingly necessary.

Speaker 1 (14:58):
So it's about spotting the those subtle, almost unconscious patterns
that betray a machine pretending to be human. What about
the broader infrastructure level. We hear a lot about zero
trust these days. How does that fit in?

Speaker 2 (15:08):
It fits in perfectly. Zero trust architecture is fundamental here.
The core principle, as you probably know, is never trust,
always verify. Assume any part of your infrastructure, user, device,
network service could be compromised. This translates to enforcing strong
identity and access management at every single layer, segmenting networks,
rigorously using MUTUALTLS MTLs for service to service communication to

(15:34):
ensure both ends are authenticated, and continuous monitoring of everything.
Policy as code tools like Open Policy Agent OPA can
help apply consistent security rules dynamically across this complex environment.
Rocket Farm Studios rightly points out that zero trust isn't
just a project you complete, It's an ongoing operating principle
you have to live by.

Speaker 1 (15:53):
Okay, And if your app or API is actually using
LLLMS itself, maybe for a chatbot feature or something similar,
what specific defenses their need there? Given those prompt injection
and manipulation.

Speaker 2 (16:01):
Risks we discussed right If you're integrating LMS directly, you
absolutely need LLM aware security filters and strict prompt hygiene.
This is critical. It means implementing context aware input sanitization
before data even reaches the LLM, building prompt filtering layers
designed to detect and block known malicious patterns or attempts
to inject harmful instructions. You also need output monitoring, checking

(16:25):
the LM's responses for signs of hallucination, data leakage, or
harmful content before it reaches the user or another system.
As the defending API's source mentions, you must defend against
prompt injection using techniques like impute output sanitization and clear
separation between instructions, user data, and back end commands within
the prompt itself. And don't forget data hygiene for the

(16:46):
agent's memory in any vector databases that uses regular checks
and cleaning are needed to mitigate those memory poisoning risks.

Speaker 1 (16:53):
Makes sense treat the LLM interaction point as a critical
security boundary. What about securing the execution of AI tools
since those can be manipulated?

Speaker 2 (17:01):
Yes, that's another crucial control point. The O Boss playbooks
offer good guidings here. You need to restrict how and
when AI agents can invoke tools. This means applying strict
access policies based on context, perhaps even requiring function level
authentication for sensitive tool actions. Running tool executions in sandboxed
environments to limit potential damage is also a very good practice,

(17:24):
and critically, you need detailed logging of all AI tool interactions.
This isn't just for auditing, it's for real time detection
of suspicious patterns. Like that command chaining we talked about,
where an agent tries to string together multiple tool calls
to circumvent security policies.

Speaker 1 (17:38):
It sounds like a lot to build and manage. Are
there specialized AI security tools emerging that can help teams
implement all this It feels like a massive undertaking from
many organizations.

Speaker 2 (17:48):
It is a significant undertaking, and yes, specialized tools are
definitely part of the solution. The defending API source mentions
employing dedicated API security platforms that incorporate real time anomaly
detection and behavior base blocking specifically tune for these kinds
of threats. And here's a fascinating idea they propose. Consider
deploying a good guy AI, maybe a smaller efficient LLM

(18:10):
designed to act as a security intermediary or monitor. Its
job would be to inspect the intense derived from user
prompts or agent plans before execution, flagging suspicious flows or
potential policy violations. It's essentially leveraging AI defensively to fight
offensive AI.

Speaker 1 (18:25):
Using AI to police AI. That's a proactive stance. How
do we then make sure all these defenses, the models,
the rules, the tools stay effective against such a rapidly
evolving threat landscape. Things change so.

Speaker 2 (18:36):
Fast, constant vigilance and adaptation, you have to continuously train
and refine your detection systems using synthetic attacks and rigorous
red teaming. Use AI itself to generate realistic synthetic attack
simulations that mimic the latest adversarial techniques. This helps improve
your security models without waiting for real attacks. Rocket Farm

(18:56):
Studios notes this effectively turns AI's offensive power into a
defensive advantage for you. Defending APIs also strongly emphasizes red
teaming your agentic integrations. Actively simulate memory poisoning, prompt injection attacks,
privileged misuse scenarios, try to break your own system to
find vulnerabilities before attackers do, and of course, continuously refresh

(19:18):
your threat models and risk assessments. It's not a one time.

Speaker 1 (19:20):
Thing, right, constantly challenging your own assumptions and defenses. What
about the human element, compliance, governance training? Those feel like
they could easily get lost in the shuffle with all
these complex technical challenges.

Speaker 2 (19:30):
They're absolutely critical and mustn't be overlooked. You need to
stay informed about and compliant with emerging AI security standards
and regulations. This includes things like ensuring transparent logging, providing
explainability for AI decisions where required, which could be tough,
and adhering to data minimization principles only giving the AI
access to the data it absolutely needs. Training your developers

(19:54):
and security teams specifically on agent specific vulnerabilities and secure
development practices for AI is non negotiable. Everyone needs to
understand these new risks, and finally, you must establish clear
agent life cycle governance. This means having defined approval flows
for deploying new agents or giving them new capabilities, using
isolated environments for testing and development, and implementing sensible human

(20:17):
in the loop checkpoints for particularly high risk actions.

Speaker 1 (20:21):
That governance piece sounds key to keeping things under control. Now,
speaking of comprehensive solutions and bring this down to practical tools,
how does our sponsor approve specifically fit into this layered
defense picture? For mobile apps? In their APIs.

Speaker 2 (20:33):
Approof plays a really crucial role, particularly at the edge,
securing the mobile app itself and its communication with the
back end APIs. They provide patented mobile app Attestation technology.
What this does is verify in real time that only genuine,
untampered versions of your mobile app running on safe non
compromised devices are allowed to talk to your APIs. This

(20:55):
is a powerful first line of defense. It effectively blocks
a huge amount of automated trees right at the source,
including sophisticated AI powered bots, modified apps, or attempts using
reverse engineered clients, because they simply can't pass the attestation check,
so it stops the.

Speaker 1 (21:09):
Fakes before they even get to make an API call.

Speaker 2 (21:12):
Essentially, yes, it verifies the integrity of the calling app
and its environment continuously. They offer this real time app
and device integrity verification, plus their dynamic API shielding allows
security policies to be updated over the air almost instantly,
which is vital for reacting quickly to newly discovered threats
without needing an app update. And another key aspect is

(21:34):
their Secure Credential Management Approve helps eliminate the need to
hard code API keys or other secrets directly within the
mobile app code, which is a common vulnerability exploited by
attackers including AI tools scanning for leaked credentials.

Speaker 1 (21:48):
Right getting those secrets out of the app binary is.

Speaker 2 (21:50):
Huge it is Approve manages these secrets securely in the
cloud and delivers short lived tokens directly to attested apps
when needed, making it easy to manage and rotate credentialstickly
reducing the risk of AI driven credential leaks. And they
provide this unified protection across iOS, Android and Harmonios, which
is important for covering the diverse mobile landscape. Developer's face, that.

Speaker 1 (22:12):
Really does sound like a comprehensive approach, focusing on securing
that crucial link between the mobile device and the back
end APIs against these modern threats. It's about protecting the
entire app ecosystem it is.

Speaker 2 (22:24):
And that layered approach is key. The core message here,
I think is that AI represents this incredibly powerful dual
use technology, a force for both offense and defense in cybersecurity.
Protecting our mobile apps and APIs in this new era
demands a sophisticated, multi layered defense strategy that constantly adapts
to these autonomous AI agents, And we have to acknowledge

(22:47):
that the built in platform protections, while helpful, often aren't
enough on their own against dedicated attackers, leaving sensitive data
potentially vulnerable.

Speaker 1 (22:56):
Yeah, it really drives home the point that this isn't
just about patching the envoln mobility anymore. It's about building
truly resilient systems, systems that can learn and evolve faster
than the threats themselves. The future of mobile security, it seems,
really depends on how effectively we can leverage AI defensively
to fight offensive AI, turning this huge challenge into maybe
our strongest ally exactly.

Speaker 2 (23:16):
We hope this discussion has given you, our listeners, some
crucial actionable insights to think about for fortifying your own
creations and deepening your understanding of these advanced cyber threats.
Definitely consider how these principles apply to your own development, work,
your security practices, or even just your interests in this space.
Security is absolutely an ongoing process, a journey, not a destination,

(23:37):
especially in this incredibly fast moving field.

Speaker 1 (23:40):
Well said, and remember this podcast was created using human
sources and expertise assisted by AI. Thank you for joining
us on upwardly Mobile API and AB security. Stay informed,
stay secure, and stay ahead.
Advertise With Us

Popular Podcasts

Stuff You Should Know
The Joe Rogan Experience

The Joe Rogan Experience

The official podcast of comedian Joe Rogan.

24/7 News: The Latest

24/7 News: The Latest

The latest news in 4 minutes updated every hour, every day.

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.