Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Have you ever stopped to think about this: getting paid,
like actual money, by huge companies like Facebook or Microsoft
just for finding bugs in their systems?
Speaker 2 (00:12):
Yeah, legally. It sounds a bit like science fiction, doesn't it?
But it's totally real. There's this whole community out there,
ethical hackers, white hats, whatever you want to.
Speaker 1 (00:21):
Call them, and they're making serious money sometimes.
Speaker 2 (00:23):
Oh yeah, definitely. Some hunters pull in thousands, even tens
of thousands in a year, and they're playing a really
crucial role in making the Internet safer.
Speaker 1 (00:33):
Absolutely. So welcome, everyone, to our deep dive. Today we're
plunging into the pretty fascinating world of bug bounty hunting.
Speaker 2 (00:40):
It's more than just finding glitches, isn't it?
Speaker 1 (00:42):
Way more. It's like being a digital detective, a force
for good in cybersecurity. Today we're digging into the key
ideas from Bug Bounty Hunting Essentials by Carlos A. Lozano and
Shahmeer Amir.
Speaker 2 (00:53):
Great book, really practical stuff in there.
Speaker 1 (00:56):
Yeah, and our mission here is to give you the shortcut,
the core knowledge from their work, so you walk away
knowing what this field is all about, whether you're just
curious or maybe even thinking about it as a path.
Speaker 2 (01:06):
So at its heart, bug bounty hunting is really about
formalizing this whole process, right? It's a structured way to
find vulnerabilities, flaws basically, in applications: web apps, mobile apps, software.
Speaker 1 (01:20):
And companies actually pay for this. They offer rewards, bounties. Exactly.
Speaker 2 (01:23):
They give bounties to hackers who find these problems and
crucially report them responsibly through the proper channels.
Speaker 1 (01:29):
Well wait, don't these companies already have huge security teams
doing this stuff?
Speaker 2 (01:33):
Internally they do, absolutely, but internal teams, well, they can
benefit massively from having outsiders look at things, you know,
real world hackers with different perspectives, like.
Speaker 1 (01:44):
An external audit, but way more dynamic kind of.
Speaker 2 (01:48):
Yeah. These are often called vulnerability reward programs, or VRPs.
They're usually managed through these special platforms, vulnerability coordination platforms.
It's essentially crowdsourced security: companies pay individual hackers for finding
specific bugs, leveraging this huge global pool of talent.
Speaker 1 (02:05):
It really sounds like a marketplace for security skills. So
where does this actually happen? Where do these hackers go
looking for work?
Speaker 2 (02:11):
The book highlights the big players. You've got HackerOne,
that was one of the first and it's still massive.
Then there's Bugcrowd, Cobalt.
Speaker 1 (02:18):
Cobalt's the one known for PtaaS, right, penetration testing as
a service.
Speaker 2 (02:22):
That's the one. Yeah, PtaaS. And Synack is another major platform.
They handle everything there: reporting, verifying the bugs, managing the payouts.
Speaker 1 (02:29):
Okay, so you sign up for one of these platforms.
Are all the hunting grounds the programs open to everyone?
Speaker 2 (02:35):
Ah, good question. There's a key difference. You have public programs,
which yeah, are generally open to anyone who signs up
on the platform. They list the rules, what's in scope,
what's out, the bounty levels, all public.
Speaker 1 (02:48):
And the alternative.
Speaker 2 (02:49):
Then you have private programs, they're invite only. You usually
need a proven track record, good stats on the platform,
maybe specific skills the company is looking for.
Speaker 1 (02:57):
So reputation matters a lot. Hugely.
Speaker 2 (03:01):
Private programs often focus on specific maybe newer parts of
an application, and they want experienced eyes on it, trying
to avoid a flood of low severity reports.
Speaker 1 (03:10):
So how do these platforms and companies actually measure a
hunter's reputation? What are they looking at?
Speaker 2 (03:16):
It's all about the stats. The book mentions three key
ones: signal, impact, and accuracy.
Speaker 1 (03:22):
Okay, break those down.
Speaker 2 (03:23):
Signal. Signal is basically a measure of how valid your
reports are. Too many invalid or duplicate reports and your signal
goes down. It's like a noise filter for the companies.
Speaker 1 (03:34):
Makes sense. Impact, that sounds like the severity. Pretty much.
Speaker 2 (03:37):
It reflects the average bounty amount you've been awarded per
valid report. Higher impact generally means you're finding more critical
vulnerabilities. And accuracy? That's your hit rate, the number of
your reports that get accepted divided by the total number
you submit. So like ninety-one percent accuracy, that
tells a company you're consistently finding real, valid issues.
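As a quick illustration of the two metrics whose formulas are spelled out above, here's a tiny Python sketch. The numbers are made up, and signal is left out because its exact formula is platform-specific.

# Hypothetical figures for one hunter; not taken from any real platform.
reports_submitted = 23                     # total reports filed
reports_accepted = 21                      # accepted as valid, non-duplicate issues
bounties = [500, 1500, 250, 3000]          # payouts on the rewarded reports

accuracy = reports_accepted / reports_submitted * 100   # hit rate, in percent
impact = sum(bounties) / len(bounties)                  # average bounty per rewarded report

print(f"accuracy: {accuracy:.0f}%   impact: ${impact:,.2f}")
# prints: accuracy: 91%   impact: $1,312.50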
Speaker 1 (03:58):
Right. So that reputation system is key for trust. Okay,
let's shift gears. Someone listening might be thinking, this sounds cool,
but where do I even start? Do you absolutely need
a string of security certs or a fancy degree?
Speaker 2 (04:10):
That's one of the biggest myths the book tackles head on.
You don't need formal certifications or a specific degree. Age
isn't really a factor either.
Speaker 1 (04:18):
Really? That's quite empowering. It is.
Speaker 2 (04:21):
But, and this is a big but, you absolutely do
need a deep understanding of how applications are built, how
they work, and where the common security weaknesses lie.
Speaker 1 (04:32):
So less about the paper, more about the practical knowledge exactly.
Speaker 2 (04:35):
The real starting point the kickstart is learning web and
mobile application technologies inside and out.
Speaker 1 (04:42):
Okay, So if degrees aren't the gatekeepers, what's the actual roadmap?
How does someone build that practical knowledge?
Speaker 2 (04:48):
It's a hands-on journey, the book suggests. Start by reading.
Get good books on website hacking, then maybe mobile hacking.
Focus on areas that genuinely interest you.
Speaker 1 (04:57):
Then what? Just reading isn't enough, right?
Speaker 2 (04:59):
Definitely. Step two is practice. Set up virtual environments with
deliberately vulnerable applications, there are loads available, and test the
techniques you're reading about. Break things, safely.
Speaker 1 (05:10):
Okay, read practice? What else?
Speaker 2 (05:12):
Read reports, look at the proofs of concept, the PoCs,
that other successful hunters have published. The security community is
surprisingly open. Blogs like HackerOne's own, or famous researchers
like Frans Rosén, they share a ton. Learning from the
wins of others. Precisely. And finally, connect with people, network,
join online communities, maybe even team up with other learners,
(05:34):
bouncing ideas off others, that can really accelerate
Speaker 1 (05:37):
things. That sounds like a solid plan. Beyond the technical
learning curve, What about mindset? Are there specific rules or
ways of thinking that help people succeed here?
Speaker 2 (05:47):
Oh? Absolutely, mindset is huge. The book has some crucial pointers.
First off, start small, seriously, don't try to hack Google
or Microsoft on your first.
Speaker 1 (05:56):
Day out, because they're like fortresses.
Speaker 2 (05:58):
Exactly, they have armies of security people. Instead, look for programs,
maybe smaller ones or parts of bigger programs, that get
less attention. Find those bounties that go ignored. As the
book puts.
Speaker 1 (06:09):
It. Find the path less traveled. Makes sense. What else?
Speaker 2 (06:13):
Approach with clarity. Before you even start poking around, really
understand what the application is supposed to do. What are
its features, what can different users do? Check the documentation
if it's available. Know your target, right. And keep expectations low,
especially at first. Don't go in thinking you'll find a
critical bug worth thousands in your first week. Report what
(06:35):
you find, learn from it, move on. Develop a mindset
of just hunting bugs, not hunting bugs in a matter
of hours.
Speaker 1 (06:43):
Patience and persistence again. Totally.
Speaker 2 (06:45):
Yeah. Also, really learn the vulnerabilities, understand why they happen
in the code, before you try exploiting them, and stay
up to date. Things change constantly, new frameworks, new attack techniques.
Speaker 1 (06:56):
It's a continuous learning game.
Speaker 2 (06:57):
Always and remember, even finding something that doesn't qualify for
a bounty, that's still valuable experience. You learn something, and finally,
think about chaining vulnerabilities.
Speaker 1 (07:06):
Combining smaller issues.
Speaker 2 (07:07):
Yes, sometimes two or three low impact bugs combined can
create a critical risk. Look for the biggest overall impact,
not just isolated flaws.
Speaker 1 (07:16):
Okay, so you've put in the work, you've learned, you've practiced,
and bam, you've found a valid bug. Now what? The
book calls report writing an art. Why is the report
itself so important?
Speaker 2 (07:29):
Because that report, that's your communication channel. It's your
calling card, really. A good report gets you noticed. How
so? Well, it leads to faster responses from the security team,
it builds your reputation on the platform, helps you build
relationships with the program owners, and yeah, it often leads
to better payouts. It shows you're professional.
Speaker 1 (07:49):
So it's not just what you find, but how you
present it.
Speaker 2 (07:51):
Absolutely. Yeah, But before you even start writing that report,
there's homework to do on the program itself.
Speaker 1 (07:57):
Right, understanding the rules of engagement, What do you need
to know.
Speaker 2 (08:00):
You need to read their program policy carefully. What's their mission,
Which specific services or websites are actually in scope, and
just as important, what's out of scope.
Speaker 1 (08:10):
Don't want to accidentally test something you shouldn't be touching.
Speaker 2 (08:13):
Definitely not. That can get you kicked out, or worse.
You also need to check their reward structure. They usually
have tables showing bounty ranges for different vulnerability types like critical, high, medium, low.
Speaker 1 (08:25):
What else is in the policy.
Speaker 2 (08:27):
Eligibility rules like age or location restrictions, conduct guidelines, what
not to do, like public disclosure before fixing, accessing other
users' data, physical attacks, social engineering, the no-nos. Okay.
And often a list of non-qualifying vulnerabilities, stuff they
already know about or don't consider severe enough, like self-
(08:48):
XSS sometimes, or missing security headers on non-sensitive pages. Knowing
this saves you time.
Speaker 1 (08:53):
Good point, saves you writing a report they'll just close instantly.
Speaker 2 (08:57):
Exactly. And finally, look at their commitment to researchers,
how quickly they aim to acknowledge reports, investigate, and fix things.
Speaker 1 (09:03):
It sets expectations, all right, groundwork done. Now the report itself?
What makes it good? Beyond just the technical details?
Speaker 2 (09:09):
Clarity is key. It needs to be easy to follow,
even if the person reading it first isn't deeply technical. Depth,
yes, focus on the technical details, but avoid bragging or being arrogant.
And be respectful, always be respectful, to the vendor's team.
You're trying to build a positive working relationship. It pays
off in the long run.
Speaker 1 (09:27):
So what's the structure? The blueprint for a solid report?
Speaker 2 (09:32):
Pretty standard format. Usually a clear, descriptive title, a description
section giving context, what the bug is, where you found it.
Then the crucial part, the proof of concept or PoC,
step-by-step instructions so they can replicate exactly what
you did. Screenshots, videos, highly recommended. Visuals make it
so much easier for them to see the issue. After
(09:54):
the PoC, you need an impact section.
Speaker 1 (09:56):
This is where you explain why it matters.
Speaker 2 (09:58):
Exactly. What are the real-world consequences? Data theft, account takeover.
This helps them understand the severity and justifies the bounty.
And finally, remediation.
Speaker 1 (10:09):
Suggesting fixes.
Speaker 2 (10:10):
Yeah, offer specific suggestions if you can, maybe point to resources.
Don't just say fix the bug. Show you've thought about
the solution too, and.
Speaker 1 (10:17):
What if the team comes back with questions after you submit.
Speaker 2 (10:19):
Be prompt, be polite, be thorough with your answers, stick
to the technical facts. The book advises waiting to
ask about the bounty until after the issue is confirmed
or resolved.
Speaker 1 (10:29):
Good tip. And if they reject it.
Speaker 2 (10:31):
Accept it gracefully. But if you genuinely believe they've misunderstood
something critical, you can respectfully explain your reasoning again providing
more detail, but don't argue endlessly.
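To make that structure concrete, here's a rough plain-text skeleton a report following the format described above might use; the example wording and bug details are hypothetical, not taken from the book.

Title: Stored XSS in the profile bio field on app.example.com
Description: What the bug is, which component and parameter it lives in, and any setup needed to see it.
Proof of Concept: Numbered, step-by-step reproduction instructions, with screenshots or a short video.
Impact: What an attacker could realistically do (for example, session theft or account takeover) and who is affected.
Remediation: A specific suggested fix, such as output-encoding the field or tightening the Content Security Policy.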
Speaker 1 (10:42):
Okay, let's pivot to some of the actual bugs hunters
look for. The book dives into several classics. SQL injection,
or SQLi, always seems to be top of the list,
like on the OWASP Top Ten. What's the core idea?
Speaker 2 (10:55):
Right, SQLi's a big one. Essentially, it's tricking the application
into running malicious SQL code by sneaking it into user
input fields, like a search box or a login form.
It works when the application doesn't properly clean or validate
that input before using it in a database
Speaker 1 (11:10):
query. And the impact can be huge. Oh, critical.
Speaker 2 (11:12):
You can potentially bypass logins, dump entire user databases, usernames, passwords,
credit card info sometimes, or even modify or delete data.
It's really bad news for the company if it's found.
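To ground that mechanism, here's a minimal Python sketch of the root cause using the built-in sqlite3 module; the table and login functions are hypothetical, not an example from the book.

import sqlite3

def login_vulnerable(conn, username, password):
    # DANGEROUS: the input is pasted into the SQL text, so a value like
    # anything' OR '1'='1  rewrites the query and bypasses the password check.
    query = f"SELECT id FROM users WHERE name = '{username}' AND pw = '{password}'"
    return conn.execute(query).fetchone()

def login_safe(conn, username, password):
    # Parameterized query: the driver treats both values strictly as data.
    return conn.execute("SELECT id FROM users WHERE name = ? AND pw = ?",
                        (username, password)).fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'secret')")

payload = "anything' OR '1'='1"
print(login_vulnerable(conn, "alice", payload))  # returns a row: login bypassed
print(login_safe(conn, "alice", payload))        # returns None: payload treated as data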
Speaker 1 (11:23):
The book mentions a really interesting Uber example, not in
a login form, though.
Speaker 2 (11:28):
Yeah, that was wild. A four thousand dollar bounty
for SQLi, found by Orange Tsai, in an unsubscribe link
in an advertising email.
Speaker 1 (11:36):
An unsubscribe link? Seriously?
Speaker 2 (11:37):
Seriously. He found a time-based blind SQL injection. He
injected a command like sleep into a parameter in the link,
the user ID I think it was. If the page took
twelve seconds longer to load, he knew the database executed
his command. He used that to figure out database names
and users.
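Here's a hedged sketch of how that kind of time-based check can be automated in Python. The URL, parameter name, and payload syntax are placeholders (not Uber's actual endpoint), it assumes the third-party requests package, and it should only ever be pointed at a target that's in scope for a program you've joined.

import time
import requests  # third-party; pip install requests

TARGET = "https://example.com/unsubscribe"   # placeholder endpoint
PARAM = "id"                                  # placeholder parameter name

def timed_request(value):
    start = time.monotonic()
    requests.get(TARGET, params={PARAM: value}, timeout=30)
    return time.monotonic() - start

baseline = timed_request("12345")
delayed = timed_request("12345' AND SLEEP(12)-- -")  # MySQL-style delay payload

print(f"baseline: {baseline:.1f}s, with payload: {delayed:.1f}s")
if delayed - baseline > 10:
    print("Response slowed by roughly 12 seconds: the database likely ran SLEEP().")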
Speaker 1 (11:54):
Wow, what's the lesson there?
Speaker 2 (11:56):
That critical bugs can lurk in the most unexpected places.
Don't just test the obvious login forms. Test everything, even
email links, background processes, everywhere.
Speaker 1 (12:05):
Incredible. Okay, so SQLi hits the database. What about attacks
that target the user's logged-in session? That brings us
to CSRF, cross-site request forgery, right?
Speaker 2 (12:14):
CSRF is sneaky. It basically tricks a logged-in user's
browser into making a request to a website they trust,
but the request actually performs an action the attacker wants,
not the user.
Speaker 1 (12:23):
How does it even work?
Speaker 2 (12:25):
It abuses the trust relationship. Your browser stores session cookies
for sites you're logged into. If you visit a malicious site,
it might contain hidden code, maybe an image tag with
a weird URL, or a hidden form that submits automatically
that sends a request to say, your bank site using
your existing session cookie.
Speaker 1 (12:44):
So the bank thinks you made the request exactly.
Speaker 2 (12:46):
It could be used to transfer funds, change your email address,
to delete your account, anything you can normally do while
logged in. The classic example used to be like clicking
a link on a forum that secretly makes your Facebook
profile post spam nasty stuff.
Speaker 1 (13:00):
How do attackers usually deliver the payload?
Speaker 2 (13:02):
Common ways are GET CSRF, maybe hiding the malicious URL
in an img tag, or POST CSRF, using a hidden HTML
form that JavaScript submits automatically when you load the page.
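A minimal Flask sketch of the pattern being described, assuming Flask is installed; the routes and field names are hypothetical. The first endpoint trusts the session cookie alone, so an auto-submitting form on an attacker's page can drive it; the second also requires a token the attacker's page cannot read.

import secrets
from flask import Flask, request, session, abort

app = Flask(__name__)
app.secret_key = "change-me"
ACCOUNT = {"email": "victim@example.com"}   # stand-in for the user's record

@app.route("/email/change", methods=["POST"])
def change_email_vulnerable():
    # The browser attaches the session cookie automatically, so a hidden form on
    # another site can trigger this while the victim is logged in.
    ACCOUNT["email"] = request.form["email"]
    return "email updated"

@app.route("/settings")
def settings_form():
    # The legitimate page embeds a fresh anti-CSRF token in its form.
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return (f'<form method="POST" action="/email/change-protected">'
            f'<input type="hidden" name="csrf_token" value="{token}">'
            f'<input name="email"><button>Save</button></form>')

@app.route("/email/change-protected", methods=["POST"])
def change_email_protected():
    # A forged cross-site request cannot supply the right token, so it's rejected.
    if request.form.get("csrf_token") != session.get("csrf_token"):
        abort(403)
    ACCOUNT["email"] = request.form["email"]
    return "email updated"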
Speaker 1 (13:14):
And the book had a serious example from Baidu.
Speaker 2 (13:17):
Yeah, that was a critical one. Full account takeover was
possible by exploiting a CSRF vulnerability to add a recovery
email address to the victim's account. The twist was how
they bypassed the usual protection.
Speaker 1 (13:28):
The anti CSRF token right.
Speaker 2 (13:30):
Usually forms include a unique hidden token to prevent CSRF,
but in this Baidu case, the researcher found the token
wasn't properly protected. It was actually accessible within a separate
JavaScript file, so they could grab the token and then
forge their requests. Shows
Speaker 1 (13:46):
you need to protect those tokens properly too. Okay. Next up,
cross-site scripting, XSS, another perennial favorite on the OWASP
Top Ten. Yep.
Speaker 2 (13:55):
XSS is everywhere. It's another input validation failure, but this
time the attacker injects malicious JavaScript or other client
side script into a web page, which then gets executed
in the browser of other users who view.
Speaker 1 (14:07):
That page, so it attacks the user directly via their browser.
Speaker 2 (14:10):
Pretty much. It often requires some user interaction, like clicking
a link or just loading a compromised page.
Speaker 1 (14:16):
What are the main flavors of XSS?
Speaker 2 (14:18):
The book outlines the main three. You've got reflected XSS,
which is kind of immediate. The malicious script is in
the URL or input, the server reflects it back, and
it runs in the user's browser right then, like a
malicious link in an email, affects only the person who.
Speaker 1 (14:32):
clicks. Okay, one off. What else?
Speaker 2 (14:34):
Then there's stored XSS, which is often more dangerous. The
malicious script gets permanently saved on the server, maybe in
a comment thread, a user profile, a product.
Speaker 1 (14:44):
Review, so anyone who views that page gets.
Speaker 2 (14:46):
Hit? Potentially, yes. Every time that infected data is displayed,
the script runs. The book mentions a funny sort of example,
a QA tester entering a script alert script tag into a field
and crashing a marketing app with pop-ups later. That's stored XSS.
And the third type, DOM-based XSS. This one's tricky
because the vulnerability exists entirely in the client-side code,
(15:09):
in the browser's document object model, the DOM. The server might
not even see the malicious script, but the JavaScript on
the page takes input, maybe from the URL fragment, the part
after the hash, and handles it insecurely, allowing the script to run.
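A tiny Flask sketch of the reflected case just described, with a hypothetical search route; escaping the input is what turns the injected script back into harmless text. Stored XSS is the same failure, just with the unescaped value saved on the server and re-rendered later.

from flask import Flask, request
from markupsafe import escape  # ships as a Flask dependency

app = Flask(__name__)

@app.route("/search")
def search_vulnerable():
    q = request.args.get("q", "")
    # DANGEROUS: /search?q=<script>alert(1)</script> executes in the visitor's browser.
    return f"<h1>Results for {q}</h1>"

@app.route("/search-safe")
def search_safe():
    q = request.args.get("q", "")
    # escape() turns < and > into &lt; and &gt;, so the payload renders as plain text.
    return f"<h1>Results for {escape(q)}</h1>"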
Speaker 1 (15:20):
Right. And the book mentions a massive ten thousand
dollar payout for a Yahoo Mail stored XSS. What was
special there?
Speaker 2 (15:27):
That was a really clever one. In the Yahoo Mail editor,
when you attached a file, it generated some HTML automatically.
A parameter within that HTML, a data URL, wasn't properly sanitized. Yeah,
by crafting a special email, an attacker could inject
JavaScript into that data URL and it would execute for anyone
viewing the email. High impact because it's in the mail client.
(15:48):
Great explanation in the report, big bounty. Showed how complex
interactions can hide XSS.
Speaker 1 (15:55):
Okay, moving beyond these more classic vulnerability types, the book
talks about application logic vulnerabilities. These sound different. They
really are.
Speaker 2 (16:04):
Unlike SQLi or XSS, where you're often looking for specific
code patterns or lack of sanitization, logic flaws are about
breaking the intended process or rules of the application. How so?
Developers build an application with a certain workflow in mind,
a specific paradigm, as the book calls it. Logic flaws
happen when a user does something unexpected that the developer
(16:24):
didn't anticipate bypassing controls or achieving an unintended outcome. Automated
scanners usually miss these completely.
Speaker 1 (16:31):
Because they're not looking for does this make sense? They're
looking for does this match a known bad pattern?
Speaker 2 (16:37):
Exactly? Finding logic flaws requires you to really understand the
business logic, the user journey. You map out how things
should work and then try to subvert it. Look at forms,
API calls, processes involving email or SMS, anywhere state changes
or assumptions are made.
Speaker 1 (16:55):
Can you give an example. The Starbucks one sounded intriguing.
Speaker 2 (16:58):
That was a great example of thinking outside the box,
a race condition. The hunter bought multiple gift cards, initiated
transfers between them, but then used command-line tools like
curl to send simultaneous requests really fast.
Speaker 1 (17:11):
What did that do?
Speaker 2 (17:12):
It basically confused the application's process for finalizing the transfer
and updating balances. By hitting it rapidly, they prevented the
session from clearing correctly, effectively tricking the system into giving
them free credit before the balances were properly updated.
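A rough Python sketch of that kind of parallel-request probe: fire the same transfer many times at once and see whether more of them succeed than the balance should allow. The endpoint, fields, and cookie are placeholders, it assumes the third-party requests package, and it should only ever be run against an in-scope test account.

from concurrent.futures import ThreadPoolExecutor
import requests  # third-party; pip install requests

TRANSFER_URL = "https://example.com/api/giftcard/transfer"   # placeholder
PAYLOAD = {"from_card": "AAAA", "to_card": "BBBB", "amount": 5}
COOKIES = {"session": "YOUR-OWN-TEST-SESSION"}

def fire(_):
    # Each worker sends the identical state-changing request.
    return requests.post(TRANSFER_URL, data=PAYLOAD, cookies=COOKIES, timeout=15).status_code

with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(fire, range(20)))

# If more transfers return success than the source card's balance allows,
# the check-then-update sequence has a race window.
print(results.count(200), "of", len(results), "requests returned 200")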
Speaker 1 (17:25):
Because the developer assumed someone would just use a slow
human operated browser.
Speaker 2 (17:29):
Precisely. They didn't account for programmatic, high-speed interaction
bypassing the expected sequence of operations. Logic flaws often exploit these
kinds of assumptions.
Speaker 1 (17:38):
Very clever. What about subdomain takeovers? Sounds like digital squatting.
Speaker 2 (17:42):
It kind of is. It's a configuration mistake. Imagine a
company sets up a subdomain like blog.company.com and points
its DNS record, maybe a CNAME record, to a third-party
service like Heroku or GitHub Pages. Okay, now what if
they stop using that service or delete their account there,
but forget to remove the DNS record?
Speaker 1 (18:02):
Ah, so the CNAME still points to a service,
but it's now unclaimed. Exactly.
Speaker 2 (18:07):
An attacker can then go to that third party service,
claim that specific hostname, blog.company.com, and suddenly they
control the content served on that official-looking subdomain.
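A small Python sketch of how you might spot a dangling CNAME like that; it assumes the third-party dnspython and requests packages, the hostname is a placeholder, and actually claiming anything requires the program's explicit permission.

import dns.resolver   # third-party; pip install dnspython
import requests       # third-party; pip install requests

subdomain = "blog.example.com"   # placeholder

try:
    answer = dns.resolver.resolve(subdomain, "CNAME")
except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
    print("No CNAME record for", subdomain)
else:
    target = str(answer[0].target).rstrip(".")
    print(f"{subdomain} is a CNAME for {target}")
    try:
        resp = requests.get(f"https://{subdomain}", timeout=10)
        # Many hosting services return a telltale "no such app / site not found"
        # page when the hostname is unclaimed; that's the cue to investigate further.
        print("HTTP", resp.status_code, "-", resp.text[:80])
    except requests.RequestException as exc:
        print("Target did not respond:", exc)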
Speaker 1 (18:16):
Oof. What's the danger there?
Speaker 2 (18:18):
It's critical. They can host phishing pages, steal session cookies
for the main domain if cookies are scoped improperly, bypass
security policies like CSP, intercept emails if it's an MX record
takeover. Loads of bad stuff. Uber and Starbucks both had
instances mentioned where subdomains pointed to unclaimed cloud resources, easy
points for attackers if not monitored.
Speaker 1 (18:39):
A reminder to clean up your digital loose ends. Okay.
Next vulnerability: XXE, XML external entity. XML feels a bit
old school. Is this still a thing?
Speaker 2 (18:47):
Oh, definitely. Lots of systems still process XML, especially in
back-end integrations or file uploads. XXE happens when an
application parses XML input from a user, and that XML
contains references to external resources, external
Speaker 1 (19:01):
Entities, and the parser just fetches them.
Speaker 2 (19:03):
If it's poorly configured, yes. An attacker might craft XML
with an entity definition, something like ENTITY xxe SYSTEM
file:///etc/passwd, and then reference that entity later in the document.
If the parser allows external entities, it might actually read
the /etc/passwd file from the server and include its contents in the response.
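A minimal sketch of that exact pattern in Python, assuming the third-party lxml package: the same document is parsed once with external entities allowed and once with them disabled, which is the usual hardening. It's meant for a Unix-like machine where /etc/passwd exists.

from lxml import etree  # third-party; pip install lxml

MALICIOUS_XML = b"""<?xml version="1.0"?>
<!DOCTYPE data [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<data>&xxe;</data>"""

# Unsafe parser: the external entity is fetched, so the element text ends up
# containing the local file's contents.
unsafe = etree.XMLParser(resolve_entities=True)
leaked = etree.fromstring(MALICIOUS_XML, unsafe)
print((leaked.text or "")[:60])   # start of /etc/passwd

# Hardened parser: entity resolution and network access are disabled, so the
# reference is simply not expanded.
safe = etree.XMLParser(resolve_entities=False, no_network=True)
print(etree.fromstring(MALICIOUS_XML, safe).text)   # no file contents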
Speaker 1 (19:19):
Yikes, so it can read local files.
Speaker 2 (19:21):
It can read local files, make network requests from the
server's perspective, acting like a proxy, or even cause denial
of service by making the parser consume huge amounts of resources.
The billion laughs attack.
Speaker 1 (19:32):
What's a surprising place XXE has shown up?
Speaker 2 (19:34):
The book mentions a Facebook case involving a .docx file
Speaker 1 (19:37):
upload. A Word document, how is that XML?
Speaker 2 (19:41):
Modern Office documents, .docx, .xlsx, et cetera, are
actually zip archives containing multiple XML files. By embedding a
malicious DTD, a document type definition, referencing an external entity within
one of those XML files inside the .docx, an
attacker could trigger XXE when Facebook processed the uploaded document.
(20:03):
Shows that even seemingly benign file uploads can be vectors.
Speaker 1 (20:06):
Wow. Okay. Last one in the section: template injection, specifically
server-side template injection, SSTI. Sounds complex. It can be, and
Speaker 2 (20:15):
The impact is often severe. Many web frameworks use template
engines like GINGA two, Python free marker Java twig php
to dynamically generate HTML pages by embedding data into templates.
Speaker 1 (20:27):
Right, like inserting the username into a welcome message.
Speaker 2 (20:29):
Yeah, exactly. SSTI happens when user-supplied input gets
embedded directly into the template itself without proper sanitization, rather
than just being treated as data within the template.
Speaker 1 (20:38):
What's the difference?
Speaker 2 (20:39):
If the user input is treated as part of the
template code, the template engine might execute it, so instead
of just displaying the user's input, it interprets commands within it,
leading to best case maybe cross site scripting, worst case
full remote code execution RCE on the server. Because template
engines often have access to powerful back end objects and.
Speaker 1 (20:58):
Functions. How would you even spot that?
Speaker 2 (21:00):
You try injecting characters or sequences that the template engine
uses for its syntax. A common test payload is something
like {{7*7}}. If the application responds with forty-nine instead of
just echoing it back, you know the template engine evaluated it.
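A minimal Jinja2 sketch of that difference, assuming the third-party jinja2 package: input pasted into the template text gets evaluated, while input passed in as data does not.

from jinja2 import Template  # third-party; pip install jinja2

user_input = "{{ 7 * 7 }}"

# Vulnerable: the user's input becomes part of the template source itself.
print(Template("Hello " + user_input).render())              # -> Hello 49

# Safe: the template is fixed and the input is only substituted as data.
print(Template("Hello {{ name }}").render(name=user_input))  # -> Hello {{ 7 * 7 }}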
Speaker 1 (21:11):
And the book had an Uber example for this too.
Speaker 2 (21:13):
Yeah, Jinja2 SSTI. A hunter put {{'7'*7}}, which
in Python Jinja2 repeats the string seven, seven times,
into a name field on rider.uber.com. An email sent
by the system then contained 7777777 in the body,
confirming the injection and evaluation. That opened the door to
extracting more info from the server environment.
Speaker 1 (21:34):
Fascinating stuff. Okay, we've covered a lot of ground on vulnerabilities.
What about the tools of the trade? What's in a
bug bounty hunter's digital
Speaker 2 (21:42):
backpack. Tools are definitely crucial assistants. You absolutely need an
HTTP proxy. Burp Suite is kind of the industry standard.
It lets you intercept, inspect, and modify all the traffic
between your browser and the target application. ZAP, the Zed Attack
Proxy, from OWASP is a great free alternative.
Speaker 1 (22:01):
So you can see exactly what's being sent and received precisely.
Speaker 2 (22:05):
Then maybe network analyzers like Wireshark for looking at
raw network packets, especially if non-standard ports are involved.
For scanning, you've got things like sqlmap, which is
amazing for automating SQL injection detection and exploitation.
Speaker 1 (22:17):
What about finding targets or mapping them out?
Speaker 2 (22:20):
Nmap is essential for port scanning and service discovery.
For broader reconnaissance, Shodan is like a search engine for
Internet-connected devices. Tools like Recon-ng help automate finding subdomains
and related infrastructure, and simple browser extensions for managing proxies
or cookies are super helpful, too.
Speaker 1 (22:37):
It's quite the arsenal. But tools alone aren't enough, right?
This field changes so fast. How important is continuous learning?
Speaker 2 (22:44):
It's absolutely paramount. You cannot rest on what you knew
last year or even last month.
Speaker 1 (22:49):
Sometimes, so how do people stay sharp?
Speaker 2 (22:51):
Many ways. Formal training and certifications can be valuable. The book
mentions GIAC certs like GPEN or GWAPT for web apps, and
Offensive Security's OSCP or OSWA are highly respected for
their practical, hands-on
Speaker 1 (23:05):
approach. Beyond formal certs?
Speaker 2 (23:07):
Reading, always reading. Classic books like The Web Application Hacker's
Handbook are foundational. Engaging in capture-the-flag, CTF, competitions
and playing on vulnerable practice platforms like Hack The Box
or DVWA, Damn Vulnerable Web Application, is amazing hands-
Speaker 1 (23:23):
on practice. Learning by doing, essentially. Exactly.
Speaker 2 (23:26):
Plus follow blogs. PortSwigger, the makers of Burp, has
a great one. Watch YouTube channels dedicated to hacking and security,
and participate in the community. Go to conferences like DEF CON
or Black Hat if you can, join local OWASP chapter meetings,
connect online. It's an ecosystem you need to be part of.
Speaker 1 (23:42):
That makes total sense. It's a journey, not a destination.
Speaker 2 (23:45):
Definitely.
Speaker 1 (23:46):
Well, we have certainly covered a lot today. We've journeyed
through the world of bug bounty hunting, figuring out what
it is, the platforms like HackerOne and Bugcrowd,
why reputation metrics like signal and impact matter. Yeah, and the
Speaker 2 (23:57):
path to getting started, emphasizing learning and practice over formal certs,
the importance of starting small, and that crucial mindset.
Speaker 1 (24:06):
Then we dug into the art of the report, why
clarity and proof of concept are so vital, and of
course the vulnerabilities themselves.
Speaker 2 (24:12):
SQLi in surprising places like unsubscribe links, CSRF bypassing tokens
hidden in JS files, the different flavors of XSS, and
that big Yahoo
Speaker 1 (24:22):
Mail payout, the cleverness needed for logic flaws like that
Starbucks race condition, the risks of forgotten DNS with subdomain takeovers,
finding XXE in Word docs, and the power of server-
side template injection.
Speaker 2 (24:34):
Plus the tools like Burp Suite and sqlmap, and
the absolute necessity of continuous learning through books, CTFs, and
the community.
Speaker 1 (24:42):
It really paints a picture of a hidden digital landscape,
doesn't It where tiny oversights can cascade into major problems,
but also where sharp eyes and ethical reporting are genuinely rewarded.
Speaker 2 (24:52):
It's a constant cat and mouse game, and these hunters
are on the front lines.
Speaker 1 (24:55):
So a final thought for everyone listening. As you go
about your day using apps, browsing websites, think about those
hidden paths. What assumptions are being made, what processes could
be subverted? How might knowing about these potential vulnerabilities change
how you interact with technology every single day.