Consider the case where you know either the software or the system which is the target. Consider also the case where your goal is "get a reliably exploitable vulnerability while coming in under budget" as opposed to e.g. "enumerate a sufficiently broad swathe of vulnerabilities such that a stakeholder is pleased with your diligence." You will probably prioritize vulnerability classes which, in your experience and/or that of the industry, are numerous and high-severity. You will probably also prioritize "the joints" of the system, because (if you've done software development professionally) you know that handoffs between scripts, teams, servers, processes, etc etc are never as well-implemented or well-tested as something which is deeply within a particular set of borders.
Thomas has posted a list of features which are high-probability candidates for game-over vulnerabilities several times on HN; see second half of this comment: https://news.ycombinator.com/item?id=7936921
Research prioritization in a wide open space is research prioritization in a wide open space, and is broadly a hard problem. Broadly speaking you look for leverage: how does the set of things you are capable of causing get to a disproportionate (potential) impact? If I were hypothetically in Google Zero, I'd be looking primarily for widely deployed software which sits in poorly-understood places, operates at least part of the time on user-supplied data, and is colocated with terrifyingly sensitive systems. Bonus points if that software is boring and so hasn't had anyone take a serious look at it in a while, like e.g. png rendering libraries or request parsing libraries.
I personally use tools that report all the activity a piece of software produces; most of that is TCP or UDP connections. Among the tools I use are Wireshark, Burp Suite, and mitmproxy. I also keep up to date with techniques used by researchers in other areas by collaborating on different forums and at networking events. I also have several hundred honeypots distributed around the planet, powered by OSSEC, to collect and analyze bad traffic.
Less than 3 years ago I switched to doing vulnerability research on mobile and desktop software, and a whole new world opened up in front of me. Debugging HTTP connections is one of the most common tasks, and there are plenty of tools available out there; I have a small set of tools for network sniffing and analysis. Going deeper into the software, I frequently do black-box penetration testing (which basically means I don't have access to the source code), so tools like IDA, Hopper Disassembler, Binary Ninja, and Cutter are the first things I reach for.
> How do people there [Google Zero] decide upon a course of research?
• Someone tips you some information about suspicious activity,
• You are using the software as a regular user and notice something weird,
• You are curious how a piece of software works and diving into it reveals secrets,
• One of your honeypots and/or network sniffers alerts you about unwanted connections,
• The author of that software requests you to do some penetration testing for an audit,
• Someone found a small problem in the software and you dive in to try to find more,
• And, most commonly, you are bored and want to pass the time doing something even more boring :D
Am I wrong? I'm not in the field, so I don't really know. I have lots of questions. Is it common for security consultancies to do only white-box reviews, or would that be a bad decision business-wise? Is it common to charge for fixes to vulnerabilities found during an audit? What if the flaw is in an open-source library?
Usually the test will be done at a fixed price, with a fixed scope (what they are and aren't allowed to test). The result will usually be a report detailing the vulns, along with recommended fixes/remediations, and sometimes a 'post-fix test' to check whether the company has successfully remediated the issues.
White-box testing tends to look at the system/application from an inside-out perspective, whereas black-box testing is an outside-in view. The benefit of white-box is a very thorough assessment of the system, but it will be time-consuming and expensive. Black-box, on the other hand, can simulate the likely attacks from an adversary and can sometimes be relatively quick, depending on the system's attack surface.
Hope this helps.
As for charging, I've never been paid based on findings, but based on time. If they fix while I'm testing that is great work by their team, but I'd prefer a stable test env so it's a bit annoying.
Libraries can be an interesting area. I focus my testing on code the client controls and only note known vulns in libraries they use. I have found issues in libraries before; we report them to the client and work with them to disclose to the vendor if they want.
OWASP also is a great baseline to start recon.
But to add one more point: most of the time it's because of misapplication or something not following good practice, and knowing this is only possible by being in the field for a while.
Likewise, once a technique has been successfully used to exploit one piece of software, there's a lot of mileage in just trying that technique against everything else.
There's also the "try everything" option of fuzzing; the default tool for this is afl-fuzz, which runs automatically once you've set up the target in a suitable configuration.
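afl-fuzz itself instruments compiled binaries, but the core mutate-and-observe loop is simple enough to sketch in a few lines. Everything here (the parser and its bug) is made up purely for illustration:

```python
import random

def parse_record(data: bytes) -> int:
    # Hypothetical fragile parser: trusts a length byte without checking it.
    length = data[0]
    if length > len(data) - 1:
        raise IndexError("declared length exceeds buffer")  # the bug to find
    return length

def mutate(seed: bytes) -> bytes:
    # Simplest possible mutation: overwrite one random byte.
    data = bytearray(seed)
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10000):
    # Run mutated inputs through the target and record anything that crashes.
    crashes = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            parse_record(case)
        except Exception as exc:
            crashes.append((case, exc))
    return crashes

random.seed(0)
crashes = fuzz(b"\x04abcd")  # valid seed: length byte 4, then 4 payload bytes
print(len(crashes) > 0)      # the loop quickly mutates the length byte past 4
```

Real fuzzers add coverage feedback, corpus management, and smarter mutations on top of this loop, which is what makes them effective against non-trivial targets.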
Generally there are three strategies:
- try to get some executable machine code in from outside and run it (buffer overrun, use-after-free etc)
- look at the set of files and data considered "trusted" and put something untrustworthy in there (XSS, DLL injection, /tmp exploits)
- attack the hardware (JTAG, power analysis, key exfiltration)
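For the second strategy, the classic /tmp case is a process that trusts a predictable filename in a world-writable directory. A sketch of the vulnerable pattern versus the safer one (the cache-file name is hypothetical):

```python
import os
import tempfile

# Vulnerable pattern: a predictable name in a world-writable directory.
# An attacker who pre-creates /tmp/app.cache (or plants a symlink there)
# controls what this process reads, or where its writes actually land.
bad_path = "/tmp/app.cache"

# Safer pattern: ask the OS for an unpredictable name, created with O_EXCL,
# so a pre-planted file or symlink makes the call fail instead of being followed.
fd, good_path = tempfile.mkstemp(prefix="app-")
with os.fdopen(fd, "w") as f:
    f.write("cached data")
os.unlink(good_path)  # clean up when done
```

The point is that "trusted" locations are only as trustworthy as the least-privileged user who can write to them.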
I mostly do cryptographic engineering, so when hunting for issues, I search for things that are usually problematic. For example, search for something like "XOR encrypt" and you might find someone doing something they shouldn't.
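To make that concrete, here's a toy illustration (key and plaintext both made up) of why repeating-key XOR "encryption" falls apart the moment any plaintext is known: XORing ciphertext with known plaintext hands you the keystream directly.

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: the "encryption" scheme you hope not to find.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"s3cr3t"  # hypothetical hardcoded key
plaintext = b"GET /index.html HTTP/1.1"
ciphertext = xor_bytes(plaintext, key)

# Known plaintext instantly recovers the key: c XOR p = k.
recovered = xor_bytes(ciphertext[:len(key)], plaintext[:len(key)])
print(recovered)  # b's3cr3t'
```

Protocol headers, file magic bytes, and boilerplate strings make "known plaintext" the rule rather than the exception, which is why this pattern is such a reliable find.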
You can also try to find problematic implementations of standards by searching for those standards and trying to find comments or similar code. You might find some interesting stuff by searching "ECIES" or "NIST SP 800".
If your goal is to begin research, typically you'd find a problem, exploit technique, or vulnerability class that interests you. Then you start looking for places where you can see how people defend against it (if at all). This is when you start finding issues pretty quickly, since you develop custom heuristics for the code you examine.
Best tip from me would be to get to know some standards and see if they are being implemented correctly.
Eventually you'll find something if you're auditing a product, because you'll start at the application interface layer and work your way down.
No issues with the design of the application (this is end-game 50-75% of the time)?
OK, what about the libraries you've used.
OK, what about the framework you've built on.
OK, what about the web server you're running.
OK, what about other services on the web server you're running.
OK, what about the operating system you're running.
OK, what about the people who administrate the services you're running (this is usually end-game 98% of the time - it's the "auto-win" card if it's in-scope).
And in between all of the above, you can leverage the holes you've found to find more holes in earlier and later steps.
Industry knowledge and following trends are useful. Following CVEs reveals problem areas in software. Some industries or entities may not devote much time to security review, leading to buggy code. Some, unfortunately, see security only as an expense.
Looking for vulns in locations where others have not or are unlikely to look, due to effort or domain knowledge requirements, can be very fruitful.
Directed fuzzing can yield great results. Any sort of parser written in a lower-level language like C or C++ is a good target. Spend manual review time on areas that are unlikely to be reached by the fuzzer. Keep in mind fuzzers aren't a silver bullet, though, and won't catch everything.
Running static analyzers or grepping for common errors can often turn up quick hits.
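The grepping approach can be sketched in a few lines; the pattern list below is just a starting point for C codebases, not anything exhaustive:

```python
import re

# A few classic C footguns worth flagging for manual review.
SUSPICIOUS = [
    r"\bstrcpy\s*\(",   # no bounds check
    r"\bsprintf\s*\(",  # no bounds check
    r"\bsystem\s*\(",   # possible command injection
    r"\bgets\s*\(",     # always unsafe
]

def quick_scan(source: str):
    # Return (line number, line) for every line matching a suspicious pattern.
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pat in SUSPICIOUS:
            if re.search(pat, line):
                hits.append((lineno, line.strip()))
    return hits

code = 'char buf[8];\nstrcpy(buf, user_input);\nprintf("ok");\n'
print(quick_scan(code))  # [(2, 'strcpy(buf, user_input);')]
```

A hit is not a vulnerability, just a prioritized place to spend manual review time; real static analyzers add data-flow reasoning to cut down the noise.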
Complex specifications often have many errors when implemented. I’ve heard a few stories of RCE vulns due to buggy X.509 parsers.
Developing a threat model is helpful to find high impact vulns.
Knowledge is also key. Understanding components at the unit and integration level is a must.
After doing security reviews for a while, you develop an intuition of where to look. Every once in a while though, you bump into a SQL injection on a login page, so don’t overlook the simple things.
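The login-page case is worth spelling out, since the fix is so cheap. A minimal sketch with sqlite3 (the table and credentials are made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def login_vulnerable(name, password):
    # String interpolation: the classic injectable login check.
    q = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return db.execute(q).fetchone() is not None

def login_safe(name, password):
    # Parameterized query: input can never change the query's structure.
    q = "SELECT * FROM users WHERE name = ? AND password = ?"
    return db.execute(q, (name, password)).fetchone() is not None

payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))  # True: authentication bypassed
print(login_safe("alice", payload))        # False
```

The payload turns the WHERE clause into a tautology in the vulnerable version, which is exactly the kind of "simple thing" still found on real login pages.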
For example, if there's a bug in libfoo's ASN.1 structure parsing, then chances are that any implementation of the same structure parsing is going to have similar or identical bugs. It might not be the same field, but this certainly tends to do well as a strategy for finding bugs in libraries, file format bugs and complex network services.
I can't speak for Google Zero, but from the people I know there, they tend to look at a broad area of interest, research it painstakingly and then drill down deep while the bugs drop out. A good example of this is James Forshaw's work on Windows kernel bugs, which started as looking into the Windows file structure and alternate data streams and has slowly morphed over time into walking through Windows' local attack surface.
Again, people I know who have spent far too much time looking for bugs in specific pieces of software tend to take the deep dive approach as it yields more interesting bugs. The broad at-scale reimplementation approach finds bugs, but they're not as interesting.
"Throw a rock, you gonna hit something"
--- Ted Unangst, about OpenSSL
I guess it's true for most software. One bug per 1,000 LoC is pretty standard. Even nowadays, I regularly see people leave their systems open to super basic input-validation vulnerabilities because they only think about doing things right on the obvious surface area, but then they'll have some batch process that analyzes log files as a one-off script, and that script is vulnerable if a user sends a malicious HTTP header or something like that.
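A sketch of that malicious-header scenario (the log format and field names are hypothetical): a batch script that parses logs line by line can be fed forged entries by anyone who controls a header.

```python
# Hypothetical access-log writer that trusts the User-Agent header verbatim.
def log_line(ip, user_agent):
    return f"{ip} ua={user_agent}"

# A header containing a newline forges a second, attacker-chosen "log entry",
# confusing any one-off script that processes the log line by line.
evil_ua = "Mozilla/5.0\n10.0.0.1 ua=admin-login-ok"
entry = log_line("203.0.113.5", evil_ua)
print(entry.count("\n"))  # 1: one request produced two apparent lines

# Minimal fix: strip control characters before logging.
def log_line_safe(ip, user_agent):
    clean = "".join(c for c in user_agent if c.isprintable())
    return f"{ip} ua={clean}"
```

The front-end request handler validated nothing here because "it's just a log"; the trust boundary only becomes visible once a second program consumes the file.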
Another way would be to try and think how a particular thing was written and figure out ways you can break it. I found plenty of buffer overflow vulns in custom TCP servers this way, but you can also find less serious things that let you do things you're not supposed to.
For example, an ecommerce business that would let you add an optional service charge allowed negative numbers (to deduct money from the order).
Another online shop had test item ids with negative prices in the database.
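The service-charge bug is easy to sketch (function names hypothetical): the whole vulnerability is a missing sign check on a number the user controls.

```python
# Vulnerable: accepts a negative "optional service charge" and
# happily deducts it from the order total.
def order_total_vulnerable(item_price, service_charge):
    return item_price + service_charge

# Fixed: reject negative amounts before doing arithmetic on them.
def order_total_safe(item_price, service_charge):
    if item_price < 0 or service_charge < 0:
        raise ValueError("amounts must be non-negative")
    return item_price + service_charge

print(order_total_vulnerable(100, -50))  # 50: attacker paid half price
```

Business-logic flaws like this never trip a fuzzer or a scanner; they only fall out of thinking about what values the developer implicitly assumed.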
During recon, if you can find out what tech is being used, you can check whether it's outdated or where vulnerabilities were found in the past. If you're doing penetration testing/vulnerability assessment, you're not inventing new exploits, just using what's already out there and tweaking it. Research on new exploits is rarer as a job, I think.
See [0] for the steps of pentesting and OWASP [1] for everything regarding security.
[0] https://www.cybrary.it/2015/05/summarizing-the-five-phases-o...
[1] https://www.owasp.org/index.php/Main_Page
Also there is a big security community on twitter where you can see researchers tweet about a lot of the stuff they're working on right now.
It is really a hard question, because it depends on several variables. I would say the most important step is information gathering, if you want to find a vulnerability in a system. There are several cases where you know a vulnerability is present just by checking version numbers. In bigger systems, always search for older, unmaintained functions that might still be present; this is how a security researcher managed to find a critical vulnerability in a Google service that was probably used by nobody. If I am given the task of finding vulnerabilities in a standalone system, I always look first at functions that are not crucial for the system to work, because they tend to be less tested. If you have the possibility to upload files, then you can find a vulnerability with almost 100 percent certainty, so I would recommend spending a significant amount of time testing that. Another good indicator of potential vulnerabilities is user input being reflected in any manner. If you have access to the list of components used, always check whether they have any known vulnerabilities; this can be really helpful input if the vulnerable features of a component are actually used by the system. And if you can turn on options that make the system behave differently, always test them, because most automated scanners will miss vulnerabilities that are only present in certain configurations.
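Reflected input is cheap to probe for: send a marker that only survives intact if output encoding is missing. A sketch (the probe string and render functions are made up):

```python
import html

# Hypothetical probe: survives verbatim only if output encoding is absent.
MARKER = '"><svg onload=x>'

def render_vulnerable(query):
    # Reflects user input into HTML with no encoding.
    return f"<p>Results for {query}</p>"

def render_safe(query):
    # Encodes metacharacters, so the probe cannot break out of the text node.
    return f"<p>Results for {html.escape(query)}</p>"

def reflects_unescaped(page):
    return MARKER in page

print(reflects_unescaped(render_vulnerable(MARKER)))  # True: worth digging
print(reflects_unescaped(render_safe(MARKER)))        # False
```

A surviving marker isn't yet a working XSS (context and filters matter), but it's exactly the "reflected in any manner" signal that tells you where to spend manual effort.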
One of the best heuristics I had when approaching a system for the first time was to start poking at non-core features that were probably bolted on late: things like management portals (whether web or console), user customization settings, and anywhere arbitrary files can be fed into the system. Those areas are usually very fruitful, and what I learned there helped me understand and contextualize later research and discoveries in the core components.
Another is to focus on components that are of high impact because they are used everywhere: standard UNIX tools, compilers, shells, OpenSSL and friends, BIOS, CPUs, common network controllers, disk firmware etc. and analyze them for anything you can think of, run a fuzzer on them etc.
If you look at public penetration testing reports [3], you'll see that most have no section on methodology, so it's reasonable to assume there are no true common standards or bodies of knowledge for finding security vulnerabilities.
For some application security fields like web application security there are at least some semi-rigorous catalogs [1,2] which can help you to conduct more comprehensive code audits or security tests/audits.
As already mentioned, there are tools which can help you conduct more professional and thorough code audits: static security source-code analyzers and dynamic analysis tools (e.g. Valgrind for memory-related bugs, or afl as a fuzzing example). These tools focus on implementation bugs; design weaknesses still have to be evaluated manually.
In my opinion the discipline of software security assessment hasn't grown up yet, but there is definitely research going on to improve the situation; see e.g. [4] for a research example on finding bugs statically.
[1] OWASP Testing Guide v4: https://www.owasp.org/index.php/OWASP_Testing_Project
[2] OWASP Application Security Verification Standard (ASVS): https://www.owasp.org/index.php/Category:OWASP_Application_S...
[3] https://github.com/juliocesarfort/public-pentesting-reports
[4] Modeling and Discovering Vulnerabilities with Code Property Graphs: https://www.sec.cs.tu-bs.de/pubs/2014-ieeesp.pdf
Or maybe the individuals focus on a specific theme or pattern.
They can often use the experience if they follow a similar pattern.
For example someone might focus on password manager applications.
If they find one family of weaknesses they can often go test other similar applications to see if they have made similar mistakes.
That's why it's often suggested to start from square one if you want to become a solid security researcher.
"Give me six hours to chop down a tree and I will spend the first four sharpening the axe."
--- Abraham Lincoln