I sometimes feel that our industry misses the point of what we (security professionals) are doing here.
In a nutshell, the current 'Web Application Security assessment' world is far from working (see AppScan 2011 for a fictitious story about what, from a technology point of view, these engagements should deliver).
Security Engagements (namely Web Application ones) should not be seen as a game of 'cat & mouse' where the 'ethical attacker' is trying to break the system (and ultimately prove to the client that they, the security consultants, are any good).
My view is that security engagements are 'knowledge transfer exercises' where people with specific knowledge in one area (Web Application Security) are helping, as much as they can, the people who don't have it (Managers, Software Architects, Developers, Clients, etc.) during the short period that they are involved with the application (i.e. the 'security engagement period').
The ultimate goal is Risk Reduction with the “Owners, Builders, Buyers & Users” of the target applications being able to make knowledgeable decisions about the security profile of their application (this is what we at OWASP call ‘visibility’).
To play a 'game' where these experts (i.e. the Security Consultants) are NOT provided AS MUCH INFORMATION AND SUPPORT AS POSSIBLE during their engagements is, frankly, inefficient, unproductive and expensive.
Now talking directly to my peers (the security consultants), regardless of the type of test that you are doing, black-box or white-box (and the time allocated to it), sorry, but you are NOT doing a good job for your clients if:
- you don’t have access to the source code
- you don’t have access to a live instance of the application
- you don’t write unit tests for your results
- you don’t understand the client's business model
- you are not writing WAF rules or patching the app
- you are not giving the developers ‘auto code fixers’
And here is the bottom line: the measurement of our success should NOT be how many vulnerabilities were DISCOVERED, but how many vulnerabilities were FIXED (or MITIGATED) by the client.
We will be doing our job if we are able to implement workflows that allow developers to easily and quickly fix the reported vulnerabilities, then deploy and test those fixes.
The rest of this post will look at each of these 6 requirements individually:
1) If you don’t have the source code, then you are not doing a good job.
Regardless of whether you use tools or do it by hand, when doing a black-box assessment, lack of access to the application's source code will make you very inefficient.
Having access to the source code gives you the ability to understand what is going on and to write proofs of concept much more quickly, efficiently and safely (hands up, anyone who has 'bricked' a server or application during a penetration test engagement).
It is vital that the client understands the importance of giving you the code. When you are doing a black-box engagement you need to show your client (in the short time allocated to the project) what the problems are, and access to the source code will allow you to use your time more effectively.
If the client does not have access to the source code of the applications you are testing, that in itself could be a problem (especially if the client paid for its development).
Note that when dealing with managed languages like Java or .NET, one can even get away with only being given access to the application's DLLs, WARs and config files (in most cases a zip of the target web folder is all that is needed).
2) If you don’t have access to the live instance of the application, then you are not doing a good job.
Here is the reverse: if you are doing source code analysis and have access to the code, but you don't have access to a live instance of the application, you will also not be able to do as good a job.
Because even if your focus is on static analysis or source code analysis, you need the black-box approach and access to the application so that you can quickly:
a) understand how the application works,
b) understand if the issues you are finding are actually exploitable, and
c) pragmatically measure how much coverage & visibility your static-analysis efforts (manual or automated) really have
Please note that you don't have to find, exploit, write and document a proof of concept for every single problem that you find (just once per vulnerability type or pattern).
Since vulnerability exploitation is a good measure of the exploitability of a particular vulnerability, I am a great believer that you need to show these exploits in action to everyone from business owners to developers (one exploit per insecurity-pattern).
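As an illustration of the kind of quick check I mean, here is a minimal sketch in Java. It takes a hypothetical static-analysis finding (a string-concatenated SQL query built from an 'id' parameter on a made-up /item page of a made-up testapp.local host) and asks the live instance whether attacker-controlled input actually reaches the SQL layer; the URL and error signatures are illustrative placeholders, not a real target:

```java
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SqlInjectionProbe {

    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint taken from a static-analysis finding:
        // a string-concatenated SQL query built from the 'id' parameter
        String probed = fetch("http://testapp.local/item?id=1%27"); // trailing single quote

        // A database error leaking into the response is strong evidence
        // that the flagged code path is reachable with attacker input
        boolean errorLeaked = probed.contains("SQLException")
                || probed.contains("syntax error");
        System.out.println(errorLeaked
                ? "Finding confirmed: the input reaches the SQL layer unescaped"
                : "Not confirmed: needs a different probe or manual review");
    }

    private static String fetch(String target) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
        // Error pages (HTTP 500) are exactly what we are looking for,
        // so fall back to the error stream when the server reports one
        InputStream in = conn.getResponseCode() < 400
                ? conn.getInputStream() : conn.getErrorStream();
        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line).append('\n');
            }
        }
        return body.toString();
    }
}
```

One confirmed probe like this per insecurity-pattern is usually enough to turn a 'static analysis says maybe' into a demonstrable, prioritizable finding.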
3) If you don’t write unit tests for your results, you are not doing a good job.
This scenario is applicable to both black-box and white-box.
The core idea here is that unit tests are something that developers understand.
A unit test is a repeatable mechanism that allows you to replicate what you have done (i.e. the process of identifying and/or exploiting the vulnerability). It can be a positive test or a negative test. You can have a unit test that tests for something that is there or something that isn’t there (see AppScan 2011 for an example of what this could look like in practice).
From a security point of view, you should be writing unit tests that fail until the application is secure.
This is a great way to communicate with developers and gives management visibility into what is going on. It also:
- allows managers to have measurable deliverables,
- allows developers to understand where you are coming from and visualize what you are telling them, and
- allows QA to replicate the problem and confirm its resolution.
Until you give a developer a unit test, they are unable to relate to what you are doing
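To make this concrete, here is a minimal sketch of what one of these security unit tests could look like, assuming a JUnit 4 setup and a hypothetical /search endpoint on a made-up testapp.local host; it fails while the reflected XSS exists and passes once the output is properly encoded:

```java
import static org.junit.Assert.assertFalse;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import org.junit.Test;

public class SearchPageSecurityTest {

    // Hypothetical endpoint taken from the engagement's findings
    private static final String TARGET = "http://testapp.local/search?q=";

    @Test
    public void searchPageMustNotReflectUnencodedHtml() throws Exception {
        String payload = "<script>alert(42)</script>";
        URL url = new URL(TARGET + URLEncoder.encode(payload, "UTF-8"));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
        }

        // This test FAILS while the vulnerability exists and passes once
        // the application encodes its output, so it doubles as the
        // 'fixed / not fixed' signal for developers, QA and management
        assertFalse("Reflected XSS still present on /search",
                body.toString().contains(payload));
    }
}
```

A developer can run this locally before and after the fix, and QA can drop it into the regression suite to confirm the issue stays fixed.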
4) If you don't understand the client's business model, you are not doing a good job.
This is very important!
In order to provide recommendations that make sense to the client from a business point of view, you have to understand the target application and the way the client's business works.
If you don't understand the client's business model, what risks they care about and what their history in Web Application Security is, then you are 'talking in a bubble', and somebody on the client's side (who is probably less prepared than you) is going to have to figure out what your 'mumbo-jumbo tech talk and presentation' actually means to their business.
Note that from a technical point of view, you (the security consultant) have a much better understanding of the security implications of the issues reported. If you are able to allocate enough time to understand the client's business model, you can cross-map both worlds and give the client a much more accurate representation of that application's risk profile (and what should be done next)
5) If you are not writing WAF rules or patching the app, you are also not doing a good job.
The power of writing WAF (Web Application Firewall) rules is that you give the client a short-term solution while the problem is being fixed (or, depending on the problem and the patch, a medium to long term solution).
This is very important because virtual patching allows customers to quickly mitigate or reduce the risk, which gives them some breathing space plus the ability to think strategically about what they want to do.
It even gives them the ability to not fix it, if that’s what they decide (i.e. they accept the risk).
Either way, you have done your job: you analyzed the application, found security issues, provided practical remediation measures, and helped the client to reduce their risk exposure.
Once the market evolves a bit more, I think that WAF rule writing and WAF rule verification will become another profitable service provided by Application Security consultancy companies (as a preview of how this market will also need to be played under an Open Source umbrella, check out what Breach is doing with the OWASP ModSecurity Core Rule Set Project).
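As a sketch of what such a virtual patch could look like in practice, here is a hypothetical ModSecurity rule (the /item endpoint, the finding and the rule id are made up for illustration) that blocks non-numeric values of the 'id' parameter until the code-level fix ships:

```
# Hypothetical virtual patch: block non-numeric 'id' values on /item
# until the underlying SQL injection is fixed in the code
SecRule REQUEST_URI "@beginsWith /item" \
    "id:900101,phase:2,deny,status:403,log,\
    msg:'Virtual patch: non-numeric id on /item',chain"
    SecRule ARGS:id "!@rx ^[0-9]+$"
```

Note how narrow the rule is: it mitigates one concrete finding without pretending to be a generic filter, which keeps the false-positive risk low while the real fix is developed.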
6) If you are not giving the developers ‘auto code fixers’, then you are also not doing a good job.
A security consultant, especially one that understands programming, is in a much better position to evaluate the security implications of the multiple strategies & techniques that could be used when fixing (at the source code level) a particular vulnerability.
One of the areas where I want to spend resources in the future is actually writing 'auto-code-fixers'. These 'code aids' would go into the developer's IDE and would be exposed like the current IDEs' code fixing/re-writing features (I wrote a very sweet PoC for Rational's Software Analyzer product which loaded up an 'O2-massaged' source-code file and gave the developer the option to fix one of the reported findings).
Of course, some people are not comfortable with providing direct code snippets to developers which could end up in production environments (and the developer & their boss will need to tick the box that says 'I accept responsibility for this'), but by exposing this information to the developers, there is a much better chance that all relevant parties will gain a much better understanding of the root causes of the reported issues and the suggested (from a security point of view) solutions.
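To make the idea concrete, here is a hypothetical before/after in Java (the ItemDao class and its query are made up for illustration) of the kind of rewrite such an 'auto-code-fixer' could offer inside the IDE: the string-concatenated SQL that the scanner flagged, replaced with a parameterized PreparedStatement:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ItemDao {

    // BEFORE (what the scanner flags): the attacker-controlled 'id'
    // is concatenated straight into the SQL string
    public String findItemNameVulnerable(Connection conn, String id) throws SQLException {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT name FROM items WHERE id = " + id)) {
            return rs.next() ? rs.getString("name") : null;
        }
    }

    // AFTER (the rewrite the auto-fixer would offer in the IDE):
    // the same query as a parameterized PreparedStatement
    public String findItemNameFixed(Connection conn, String id) throws SQLException {
        try (PreparedStatement stmt = conn.prepareStatement(
                "SELECT name FROM items WHERE id = ?")) {
            stmt.setString(1, id);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}
```

The point is not that the snippet is copy-paste ready, but that the developer sees, in their own code and their own IDE, exactly what the secure version looks like and why it is different.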