Wednesday 24 September 2008

So what can I do with O2?

In my first post (http://diniscruz.blogspot.com/2008/09/ouncelabs-releases-my-research-tools.html) I explained why I created O2 and how it fits in Ounce’s world. In this post I will delve into what O2 allows me to do and how it revolutionized the way I perform source code security assessments.

It is probably better if I first explain how I approach these types of projects, so that I can then show how O2 fits perfectly into them.

This is the way I view these security assessments: there is a client who is paying me to review their web application for issues that might have business impact, where I am also expected to help identify the underlying root causes of the issues discovered and provide assistance with possible remediation paths. The client usually looks to me for guidance on what I need to do my job, and expects objective answers in return.


I always ask for: a) Source Code, b) access to a test environment (with test data and accounts at multiple privilege levels), c) as much documentation as possible and d) access to the main developers. The bottom line is that since I am expected to review in days or weeks something that took years to develop, I need to tilt the odds a little bit to my side :)

That said, the key items are a) Source Code and b) access to a test environment, since together they give me the best of both the ‘Black Box’ and ‘White Box’ worlds (btw, I never want to do another pure Black Box engagement (i.e. with no access to source code), since it is a very frustrating, time-consuming and non-cost-effective security assessment technique; pure White Box is only marginally better, and the best is to have both).

I use the live environment (supported by the source code) to:
  • give me a good picture of what the application is supposed to do,
  • allow me to cross-check the exposed attack surface with the one I am ‘seeing’ in the code,
  • quickly write Proof-of-Concept exploits (since in my view, for most cases, if you can’t prove that a vulnerability is exploitable, you can’t really give a solid risk classification to the client).
I like to call this ‘Source-Code Driven Pen-tests’ :)

The way I like to report vulnerabilities is by focusing on ‘Insecure Patterns’, i.e. find a particular vulnerability (always mapped to business impact and risk) and model it in order to identify its pattern(s). It could be that data is not being correctly validated, it could be a lack of authorization checks, it could be a business logic problem (where the expected business functionality is not enforced by the application’s code), etc. What matters is that once you find a bunch of entry points that end up in exploitable sinks, they can usually be grouped together via these ‘insecure patterns’. And when you have access to the source code, these patterns can be really easy to identify. Even when it takes a while to find a particular exploit, once its pattern is discovered we can scan the rest of the code base for similar occurrences.
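
To make the idea concrete, here is a small, hypothetical example of one such insecure pattern (written in Java purely for illustration; the class, method and parameter names are made up): an attacker-controlled request parameter (the entry point) flows into a string-concatenated SQL query (the exploitable sink). Once one instance of this shape is proven exploitable, the rest of the code base can be searched for the same source-to-sink pattern.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.servlet.http.HttpServletRequest;

// Hypothetical code used only to illustrate the 'insecure pattern' idea
public class OrderLookup {

    private final Connection connection;

    public OrderLookup(Connection connection) {
        this.connection = connection;
    }

    public ResultSet findOrders(HttpServletRequest request) throws Exception {
        // Entry point: attacker-controlled value taken straight from the request
        String customerId = request.getParameter("customerId");

        // Insecure pattern: tainted data concatenated directly into the SQL statement
        String sql = "SELECT * FROM orders WHERE customer_id = '" + customerId + "'";

        // Exploitable sink: the concatenated query is executed as-is
        Statement statement = connection.createStatement();
        return statement.executeQuery(sql);
    }
}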

‘Insecure Patterns’ are also a great way to talk with development teams (since usually one only has to prove exploitability on one example) and they deliver the biggest return on investment when applying fixes. This last point is very important, because I am so tired of reporting (for example) XSS vulnerabilities that don’t get properly mitigated (usually what happens is that the ‘fixes’ applied are designed to address the exact cases shown in the Proof-of-Concepts, rather than the underlying issues). So when talking about (for example) an XSS vulnerability, I would frame it as a ‘lack of data validation on user-displayed data’, which shows how on Application XYZ there are multiple paths for malicious data to enter the system, which is then shown via the web view layer using controls that perform no validation. The focus in this example would be to get the developers to add output encoding (using something like the .Net AntiXSS library, or OWASP’s AntiSamy) to the user-displayed data, since that is the most effective location to stop ALL XSS attack vectors.
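
As a rough sketch of that fix (not O2 output, and not the AntiXSS/AntiSamy APIs themselves, just a hand-rolled illustration), the idea is to encode user-controlled data at the point where it is written into the HTML response, so that every path reaching the view layer is covered:

// Minimal, illustrative output encoder; a real application should prefer a
// maintained library such as .NET AntiXSS or OWASP AntiSamy over hand-rolled code
public final class HtmlEncoder {

    private HtmlEncoder() {
    }

    public static String encodeForHtml(String untrusted) {
        if (untrusted == null) {
            return "";
        }
        StringBuilder encoded = new StringBuilder(untrusted.length());
        for (char c : untrusted.toCharArray()) {
            switch (c) {
                case '&':  encoded.append("&amp;");  break;
                case '<':  encoded.append("&lt;");   break;
                case '>':  encoded.append("&gt;");   break;
                case '"':  encoded.append("&quot;"); break;
                case '\'': encoded.append("&#x27;"); break;
                default:   encoded.append(c);
            }
        }
        return encoded.toString();
    }
}

// Usage at the view layer (instead of writing the raw value):
//   out.print(HtmlEncoder.encodeForHtml(request.getParameter("displayName")));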

And here is where O2 comes into play. In order to be able to perform this level of analysis and cover the entire spectrum of security vulnerabilities that a web application might have, I need a way to have TOTAL visibility into what is going on, and I need an engine that is able to effectively perform complex searches and calculations for me.

You might now ask: but isn’t that type of analysis exactly what a source code scanning tool (like Ounce’s or Fortify’s) should be providing out of the box?

While it would be great if they could, the reality is that these applications are SO complex and interconnected that it is impossible for ANY tool to automatically understand what is going on and, even more importantly, to understand what THE CLIENT is really worried about.

To put this into context, here is a list of vulnerabilities that I discovered using O2 (with access to the scanning modules):

  • List of .NET Web Controls (*.ascx) that are vulnerable to SQL Injection, where the vulnerability is located in an external Web Service (I used O2 to ‘glue’ two separate assessments together)
  • From a list of hundreds, list only the Web Services methods that receive tainted data via strings or whose input doesn’t go through an Int32.Parse() conversion (see the sketch after this list)
  • Find authorization vulnerabilities by mapping attacker-controllable variables to stored procedure parameters (think ‘changing the checkout value of a shopping cart’). A variation of this issue is to find vulnerabilities related to the ‘Spring MVC Autobinding vulnerability’ we disclosed a while back
  • Remote web proxy vulnerability where the attacker’s payload (the URL to fetch) was inserted into a hidden HTML form field and traveled quite far into the application before reaching the exploitable sinks
  • Race condition where depending on the page-invocation sequence, the user was able to perform unauthorized actions
  • Find authentication flaws created by developers ‘forgetting’ to use ‘standard authentication calls’
  • Find vulnerabilities created by ‘reality altering functions’ like reflection
  • Find vulnerabilities created by ‘object payload propagation’ classes like setAttribute/getAttribute pairs (i.e. HashMaps), where generic objects are used to carry tainted data from one section of the application into another (for example, in MVC-based applications)
  • … the bottom line is that if it can be found by a human doing manual code analysis, then using O2 the same security consultant will find it sooner and will have a much better picture of its real impact and of how often it occurs throughout the application
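
For the Web Services bullet above, here is a very rough approximation of the kind of filter involved (written in Java with plain reflection purely for illustration; O2 runs this kind of question against its own object model of the code, and the class and method names here are made up): from all public methods of a service class, keep only the ones that accept raw String parameters, since those can carry tainted data that never goes through a strict numeric conversion such as Int32.Parse()/Integer.parseInt().

import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: list the methods of a class that accept raw String parameters
public class StringParameterFinder {

    public static List<Method> methodsTakingStrings(Class<?> serviceClass) {
        List<Method> candidates = new ArrayList<Method>();
        for (Method method : serviceClass.getMethods()) {
            for (Class<?> parameterType : method.getParameterTypes()) {
                if (parameterType == String.class) {
                    candidates.add(method);   // receives raw string input: worth a closer look
                    break;
                }
            }
        }
        return candidates;
    }

    public static void main(String[] args) throws Exception {
        // The fully qualified name of the (hypothetical) service class is passed on the command line
        for (Method method : methodsTakingStrings(Class.forName(args[0]))) {
            System.out.println(method);
        }
    }
}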
Also very important is the fact that, because I have access to an object model of the entire code base, I am able to quickly assess whether vulnerabilities are systemic or a one-off mistake (very important to know when providing remediation advice).

Another key asset that I get from O2 is the ability to be sure that a particular issue DOESN’T exist in the application (or that the case I found is the only one). Using a combination of CIRData analysis, Traces searches and (always) RegEx queries, I can very quickly confirm that a particular issue (or pattern) doesn’t exist in the application. This is very important, since it allows the security consultant to be confident that the results discovered are an accurate representation of the application’s vulnerable state.
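
To show what the RegEx part of that workflow can look like (only a sketch, assuming a made-up sink pattern and written in Java for illustration; the CIRData and trace analysis are what make the result trustworthy, plain text matching alone is not enough), the idea is to sweep the whole source tree for a pattern and report every match, so that an empty result supports the claim that the pattern does not occur:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.util.regex.Pattern;

// Hypothetical sweep: report every line in a source tree that matches a sink pattern
public class PatternSweep {

    // Made-up example of an insecure pattern: Runtime.exec called with concatenated input
    private static final Pattern SINK =
            Pattern.compile("Runtime\\.getRuntime\\(\\)\\.exec\\(.*\\+");

    public static void sweep(File file) throws Exception {
        if (file.isDirectory()) {
            File[] children = file.listFiles();
            if (children == null) {
                return;
            }
            for (File child : children) {
                sweep(child);
            }
            return;
        }
        if (!file.getName().endsWith(".java")) {
            return;
        }
        BufferedReader reader = new BufferedReader(new FileReader(file));
        String line;
        int lineNumber = 0;
        while ((line = reader.readLine()) != null) {
            lineNumber++;
            if (SINK.matcher(line).find()) {
                System.out.println(file.getPath() + ":" + lineNumber + "  " + line.trim());
            }
        }
        reader.close();
    }

    public static void main(String[] args) throws Exception {
        // No output means no line matched this particular pattern
        sweep(new File(args.length > 0 ? args[0] : "."));
    }
}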

The big caveat to the previous paragraphs is that we can only reach this level of assurance once we are confident that we have total visibility into the application’s attack surface and code routing practices. So our job (as the knowledgeable security consultant) is to map this attack surface! (And this can be particularly hard when the target application uses popular (or custom-built) frameworks that create all sorts of abstraction layers.)

And here is the key to my paradigm shift!

A while back I realized that custom programming and scripting are a natural part of an application security assessment. And the reason is simple: these applications are so custom and unique that the only way to effectively test them is to write custom testing code (i.e. exploits).

So the only difference between now and then is that before I used to do my coding on top of a personally developed set of scripts using very raw data sets (usually RegEx based), and now I do it on top of a massive technological framework (O2 + Core) using very powerful data sets (an object model of the application, data traces, an engine able to follow tainted data, a powerful scripting engine, etc.).

Ultimately, what I like is the fact that I now have CONTROL!!!!! I have a scripting engine that I can use to automate the questions that I want to ask, and then use it to massage the answers so that I get solid and trustworthy results.

Hopefully this makes sense (if not, please leave a comment below).

This is the last (for now) of the ‘theoretical’ posts, and the next bunch will be filled with practical examples using HacmeBank and WebGoat.

Dinis