Monday, 8 December 2008

Report: Manual vs. Automated Vulnerability Assessment

Here is a very interesting research paper published on October 26, 2008 called Manual vs. Automated Vulnerability Assessment (by James A. Kupsch and Barton P. Miller, who are part of the University of Wisconsin Vulnerability Assessment project).

In this paper they used the Condor application (version Condor 6.7.12, which doesn't seem to be available for download on http://www.cs.wisc.edu/condor/downloads-v2/download.pl) and the commercial Source Code scanners Fortify and Coverity to perform an analysis of the security vulnerabilities that they discovered in Condor (see http://www.cs.wisc.edu/condor/security/vulnerabilities/ for the list of the 14 vulnerabilities they discovered, and http://www.cs.wisc.edu/condor/security/vulnerabilities/CONDOR-2005-0001.html for a detailed description of one).

I really like this concept and I'm very happy that they were able to publish their results. Basically, what they did was say: "OK, here are a number of security vulnerabilities that we discovered (probably manually), so let's see what the Source Code Scanning tools can find."

Of course, I am biased in my interest in this research, since one of my current contracts is with Ounce Labs, which is a direct competitor of Fortify and Coverity, and I've developed a number of Open Source tools (called O2) that augment Ounce Labs' technology capabilities (more comments about that in a bit).

So here is what I think is relevant about this paper:
  1. This is the first public release of Source Code Scanning analysis data mapped to real vulnerabilities. This will allow us to perform public benchmarking of different tools and understand how those tools can be used (of course I will bring Ounce Labs into the mix here)
  2. It is pretty obvious from the results that: a) out-of-the-box, the tools perform very poorly, and b) a security consultant needs to be involved to triage those findings.
  3. In fact, even Fortify, which had the best results, did not discover those 6 issues immediately (they only had 3 issues identified as Critical, with 2301 marked as Hot, 8101 marked as Warning and 5061 marked as Info)
  4. What I would like to know (and this is where the discussion becomes relevant to O2) is how much work and triage was needed to find those 6 (Fortify) and 1 (Coverity) issues? And how much manual work was needed to find the original set of vulnerabilities?
  5. I also would like to know what the criteria were for marking a scanning result as a successful discovery (do you just need to point to a dangerous function, or do you need to have a complete trace? see the sketch after this list).
  6. Other interesting questions are "How many of the findings (15466 on Fortify and 2686 on Coverity) were False Positives?" and "Were there any new vulnerabilities discovered by the tools?"
  7. One annoying thing is that the authors of this paper did not publish a link to the changes they made to Condor: "...with 13 small patches to compile with newer gcc; ... built as a "clipped" version, i.e., no standard universe, and no Kerberos and Quill as these would not build without extensive work on the new platform and tool chain ...". So I will contact them to see if we can replicate their test/scanning environment.
  8. Just for the record, I don't know (yet) what Ounce Labs' 'out of the box' results would be in this case (I will publish them once I can replicate their scans). But I think they would be (before further analysis) similar to Fortify's results.
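To make point 5 concrete, here is a minimal C# sketch of what I mean (hypothetical code written by me for illustration; Condor itself is C++, and these names are not from any of these tools' output):

```csharp
// Hypothetical example showing the difference between a tool that just
// "points at a dangerous function" and one that has a "complete trace".
using System;
using System.Diagnostics;

class TraceExample
{
    static void Main()
    {
        string host = Console.ReadLine();        // SOURCE: attacker-controlled input
        string command = BuildCommand(host);     // PROPAGATION: taint flows through
        Execute(command);
    }

    static string BuildCommand(string host)
    {
        return "ping " + host;                   // PROPAGATION: string concatenation
    }

    static void Execute(string command)
    {
        // SINK: a tool that only flags 'dangerous functions' reports this line
        // in isolation; a tool with a complete trace shows the whole path from
        // Console.ReadLine() down to here.
        Process.Start("cmd.exe", "/c " + command);
    }
}
```

A finding that only flags the Process.Start line is much cheaper for a tool to produce, but much more expensive for a consultant to triage, which is exactly the work I am asking about in point 4.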
Next I will write up an O2 Challenge for this [UPDATE: here it is: O2 Challenge #2) Find the 14 Vulns in Condor 6.7.12]. The plan (over the next couple of months) is to use this application as a C++ case study for how O2 can be used in the discovery of these types of issues (remember that, ultimately, O2 is designed to 'automate the security consultant's brain', so if a security consultant can find it, so should O2 :) ). The only 'issue' should be how many custom rules/scripts will need to be added/created.

In fact, my view is that ultimately ALL tools should be able to find these issues. The competitive advantage of Tool A vs Tool B (commercial or Open Source) should be:
  • the amount of time and customization required to find those issues (including set-up of the scanning environment)
  • the ability to replicate those 'insecure patterns' across the entire code base
  • the reporting (namely findings consolidation and automatic creation of Unit Tests; see the sketch after this list)
  • the risk rating and threat model capabilities
  • the integration with the SDL
  • the ability to model 'positive security' (i.e. map the security controls and 'prove' that they were correctly implemented)
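On the 'automatic creation of Unit Tests' point, here is a minimal sketch of what such an auto-generated test could look like (C# with NUnit; the fixture name, target method and payload are all hypothetical illustrations of the idea, not output from any of these tools):

```csharp
// Hypothetical sketch of an auto-generated unit test that reproduces a finding
using NUnit.Framework;

[TestFixture]
public class Finding42_CommandInjectionTest
{
    // Stand-in for the application method the scanner traced into
    static string BuildPingCommand(string host)
    {
        return "ping " + host;   // vulnerable: no validation of 'host'
    }

    [Test]
    public void ShellMetacharacters_ShouldNotSurviveIntoCommand()
    {
        string payload = "127.0.0.1 & del important.txt";
        string command = BuildPingCommand(payload);

        // Fails while the vulnerability exists; passes once the input is
        // validated or encoded before it reaches the command string
        Assert.IsFalse(command.Contains("&"),
            "Unsanitized shell metacharacter reached the command string");
    }
}
```

The value of such a test is that it turns a static finding into something the development team can actually run: it fails while the vulnerability exists and passes once the fix is in place.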

New O2 content, HacmeBank and 1st challenge

Here is an update of the latest content added to the O2 website at http://ounceopen.squarespace.com:
One question I had is on the file format for the videos. Which one should I use: MP4 or WMV?

The above is just a small sample of the content that I am planning to upload over the next couple weeks. So if there is an area that you really want me to cover, let me know and I will write a post about it.

Wednesday, 26 November 2008

New community website for O2

We've just created a new website for O2 users and developers: http://ounceopen.squarespace.com

In there you will find the latest source code drops, demo files and technical articles/videos.

Dinis Cruz

Sunday, 28 September 2008

ASP.NET MVC – XSS and AutoBind vulns in MVC Example

A while back (while in the middle of the research that led to the publishing of the Security Vulnerabilities in the Spring Framework Model View Controller) I decided to check whether the (still in beta) ASP.NET MVC framework was vulnerable to the same issues.

On a first quick analysis it looks vulnerable to the AutoBinding issue (and also to XSS), so here are my draft research notes as a download file (ASP.NET MVC - XSS and AutoBind vulns in MVC).
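To show the pattern I am talking about, here is a minimal sketch of the AutoBinding (over-posting) issue; the User class, the controller and the SaveToDatabase call are all hypothetical code written by me to illustrate the pattern, not code from the MVC beta samples:

```csharp
// Hypothetical sketch of the AutoBinding (over-posting) pattern
using System.Web.Mvc;

public class User
{
    public string Name { get; set; }
    public string Email { get; set; }
    public bool IsAdmin { get; set; }   // never meant to be set from a form
}

public class AccountController : Controller
{
    // The default model binder populates ANY public property of User from the
    // request, so a POST that adds "IsAdmin=true" to the form fields silently
    // elevates the account; nothing in this action prevents it.
    public ActionResult Edit(User user)
    {
        SaveToDatabase(user);            // hypothetical persistence call
        return View(user);               // if the view renders Name with
                                         // <%= Model.Name %> (unencoded), the
                                         // XSS shows up here as well
    }

    void SaveToDatabase(User user) { /* ... */ }
}
```

The XSS side is related: in the current view engine, <%= Model.Name %> renders the value without encoding, so the same unvalidated binding feeds straight into the page. I believe the [Bind] attribute's Include/Exclude lists are the intended mitigation for the binding issue, but nothing in the default template forces you to use them.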

Please let me know if I am missing something obvious, or if there is something in the new version that prevents this.
Some MVC related links:

Saturday, 27 September 2008

OWASP NYC Conference 2008

Just returned to London from the OWASP NYC Conference and, as always, it was a great experience (this was the biggest OWASP conference so far).

In addition to participating in the keynote speech, I delivered two presentations: OWASP Summit 2008 and 'Building a tool for Security consultants: A story of a customized source code'.

This last presentation was a variation on my previous two posts (OunceLabs releases my research tools under an Open Source License..., So what can I do with O2), and the questions I had after the presentation, plus the multiple positive comments/conversations, tell me that the message I wanted to pass on was well understood and received (here is a blog post with an outline of the presentation, and here is a blog post that provides a good description of what I wanted to say: OWASP NYC AppSec 2008 and NYSec Recap).

Wednesday, 24 September 2008

So what can I do with O2?

In my first post (http://diniscruz.blogspot.com/2008/09/ouncelabs-releases-my-research-tools.html) I explained why I created O2 and how it fits in Ounce’s world. In this post I will delve into what O2 allows me to do and how it revolutionized the way I perform source code security assessments.

It is probably better if I first explain how I approach these types of projects, so that I can then show how O2 fits perfectly into them.

This is the way I view these security assessments: there is a client who is paying me to review their web application for issues that might have business impact, where I am also expected to help identify the underlying root causes of the issues discovered and provide assistance with the possible remediation paths. The client usually looks to me for guidance on what I need to do my job, and expects objective answers in return.

OunceLabs releases my research tools under an Open Source license (it's called O2 and is hosted at CodePlex)

Hello, as you probably know I have been consulting with OunceLabs (http://www.ouncelabs.com) for the past 18 months, and for the last 9 months I have been deeply involved in an internal project which I am very excited about and which is now going to be released under an Open Source license (go Ounce!!!)

One of my tasks at OunceLabs was to make their technology 'Work' from the point of view of an advanced security consultant (like me). By 'Work' I mean create a model that uses (sorry for the cliché) People + Process + Ounce Technology, whereby the latter (Ounce Technology) is used throughout an entire engagement (versus the current model, where it is mainly used at the beginning of the engagement or to perform specific analysis).