Tuesday 25 October 2011

First Answer to: Why doesn't SAST have better Framework support (for example Spring MVC)?

A couple of days ago I received this question and asked it here on this blog: Why doesn't SAST have better Framework support (for example Spring MVC)? (if you don't know what SAST means, see What does SAST mean? And where does it come from?)

I wrote the answer below on that day, but since I also posted this question to the O2 mailing list I wanted to give some space for others to chip in with their views (which they did, namely John Steven, whom I will reply to later):

There are a number of reasons why the tool vendors have not been able to provide decent (or even any) wide Framework Support in their tools.


Note that this is not for lack of trying. For example, the latest version of AppScan already supports WAFL (Web Application Flow Language), which is their attempt at creating a Framework descriptor language, HP is doing interesting work on the integration of WebInspect+Fortify, and there are a couple of new players (like WhiteHat, Veracode, Armorize) that claim they will do a better job.

For me, the key problem that all tools have (not only SAST, but this is critical in SAST) is that they are all trying to find a 'big red button' while ignoring how the app actually works/behaves. They basically want to create a product that can just be pointed at an application and work.

The problem with this approach is that all apps are massively different! 

The apps themselves are built on top of MASSIVE frameworks (from the point of view of their behaviour), and even when they use common frameworks (vs writing their own), the way the actual code flows tends to be quite unique per app.

So by trying to treat the 'Application Behaviour' as a black box, and choosing NOT to (really) understand/map how it works (beyond the default Java/.NET functionality or whatever 'Framework Support' they are able to have), these tools are trying to climb a mountain that is far too big and complex.

My approach with O2 has been: "I know I will have to map how the application works/behaves and that I will need to create (from the source code or dynamic analysis) a working model of its real code/data-flows, and while I'm there, also create a set of rules for the tools that I use. My only question is: how long will it take to gain the level of visibility that I need in order to do a good job?" This is what I call 'playing the Application Visibility game'.

Basically, with O2 I'm climbing a completely different mountain.

Let's take Spring MVC as an example. The first things I do when looking at a Spring app are:
  • review the source code in order to 'codify' how the controllers are configured and what their behaviour is (namely the URLs, Command Classes and Views), as shown in the sketch after this list
    • paying special attention to any 'Framework behaviour modifications', for example filters, authentication/authorization engines, or even direct Spring MVC code patches
  • then I continue these mappings into the inner workings of the application in order to identify its 'hyper jumps' (reflection, AOP, setters/getters, hash-objects-used-to-carry-data, web services, data/storage layers, other abstraction layers, etc...) and 'data changing' steps like validation or object casting.
  • then I map out the connection between the controllers and the views (which is very important because we can't assume that there will be a path into all views from all controllers)
  • then.... (the next actions depend on how the app is designed and what other APIs or Frameworks are used)
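
Here is a minimal sketch of that first step: a 'dirty read' of the Spring XML config files that extracts the URL, controller class and Command Class for each mapped bean. It is written as a small C# script (the kind of thing I run inside O2), and it assumes the classic XML configuration style where controller beans are named after their URL and declare their form backing object via a 'commandClass' property; annotation-driven apps would need a different reader.

    // Sketch: dirty read of Spring MVC XML config files, extracting
    // URL -> Controller -> Command Class mappings.
    // Assumes classic XML config (bean name = URL, 'commandClass' property);
    // older DTD-based files have no XML namespace, so beans are matched by local name.
    using System;
    using System.IO;
    using System.Linq;
    using System.Xml.Linq;

    class SpringMvcConfigReader
    {
        static void Main(string[] args)
        {
            var configDir = args.Length > 0 ? args[0] : ".";   // e.g. the WEB-INF folder
            foreach (var file in Directory.GetFiles(configDir, "*.xml", SearchOption.AllDirectories))
            {
                var mappings =
                    from bean in XDocument.Load(file).Descendants()
                    where bean.Name.LocalName == "bean"
                    let url = (string)bean.Attribute("name")
                    where url != null && url.StartsWith("/")   // beans mapped by URL
                    select new
                    {
                        Url          = url,
                        Controller   = (string)bean.Attribute("class"),
                        CommandClass = bean.Elements()
                                           .Where(p => p.Name.LocalName == "property" &&
                                                       (string)p.Attribute("name") == "commandClass")
                                           .Select(p => (string)p.Attribute("value"))
                                           .FirstOrDefault()
                    };
                foreach (var m in mappings)
                    Console.WriteLine("{0,-25} {1,-45} {2}",
                                      m.Url, m.Controller, m.CommandClass ?? "(no command class)");
            }
        }
    }

It is deliberately 'dirty': it doesn't resolve bean inheritance, imports or overrides, but it is enough to get the first URL-to-controller map on screen and start asking questions.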

When I'm doing these steps, I (using O2) tend to do three things:
  • Create mini tools that visualize what is going on (for example URL mappings to controllers, or the complete Command Class objects)
  • Create Browser-Automation APIs that represent the expected behaviour of the target application (how to login, how to perform action XYZ, how to invoke a Web Service, etc...)
  • Mass create rules for the tools available (for example, I used to create 1000s of Ounce rules so that I would get the most out of its engine by getting it to create as many taint-flow traces as possible; see the sketch below)
So yes, I'm coding all the time.
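
As an example of the 'mass create rules' step, here is a sketch that emits one taint-source rule per Command Class setter, so that the engine creates a taint-flow trace for every data-binding entry point. The rule schema below is hypothetical (Ounce, like every engine, has its own format), and the setter lists would really come from bytecode analysis rather than being hard-coded; the point is the generation loop.

    // Sketch: mass rule creation. One taint-source rule per Command Class setter.
    // The XML rule schema here is HYPOTHETICAL; each SAST engine has its own format.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    class RuleGenerator
    {
        static void Main()
        {
            // In a real engagement this data comes from bytecode analysis of the
            // Command Classes found in the config-reading step; hard-coded here.
            var commandClassSetters = new Dictionary<string, string[]>
            {
                { "com.app.LoginForm",    new[] { "setUsername", "setPassword" } },
                { "com.app.TransferForm", new[] { "setAccountId", "setAmount"  } }
            };

            var rules = new XElement("Rules",
                from entry in commandClassSetters
                from setter in entry.Value
                select new XElement("Rule",
                    new XAttribute("type",   "TaintSource"),
                    new XAttribute("class",  entry.Key),
                    new XAttribute("method", setter)));

            new XDocument(rules).Save("generated-rules.xml");
            Console.WriteLine("Created {0} rules", rules.Elements().Count());
        }
    }

Multiply this loop by every setter, getter, validator and abstraction layer in the app and you quickly get to the thousands of rules mentioned above.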

The only difference between engagements is that I'm able to build on the technology developed in the previous engagements.

Again using Spring MVC as an example:
  • The first time I saw Spring MVC, I had a script that did a dirty read of the XML files and extracted some metadata (with a lot of manual mappings)
  • On the next engagement I was able to add support for Java bytecode analysis and analyse the Spring MVC attributes (used to mass create Ounce rules)
  • On the next engagement, I was able to start visualizing the Command Classes and created a generic API for Spring MVC (with specific classes/objects to store Spring MVC metadata in a way that made sense to us (security consultants))
  • On the next engagement, I added a number of really powerful GUIs, improved the Command Class resolution calculations and did a bunch of mappings between controllers and views
  • On the next engagement, I already had most of the core Spring MVC behaviour scripts in place, so I mainly focused on what was specific to the application being analyzed
As you can see, although there is always some level of customization, its amount (and the skill level required) is reduced on each iteration (and this is how we will scale this type of analysis).

So to play this game (and to be able to do this type of analysis), this is what is needed from the tools used (in this case SAST):
  • Ability to write scripts that directly control how the tool works 
    • Ideally most of the tool's analysis capabilities are written in 'dynamically compiled scripts' so that it is possible to modify/adjust them to the current reality (created by the application being analysed)
  • Ability to have direct access to the tool's internal capabilities via exposed APIs
  • Ability to start and stop each analysis phase (with each phase providing a modifiable dump of its internal representations and analysis so far)
  • Ability to consume, feed and correlate data from all sorts of sources: file system, config files, black-box scans, fuzzers, real-time instrumentation, security consultant's brain
  • Ability to mass create/manipulate rules
  • Ability to write rules as scripts AND in a fast-prototyping language like C#, Java, Python, Ruby or JavaScript (i.e. not in C/C++ or XML)
  • Ability to easily 'process, filter and visualize in real-time' thousands if not millions of findings (created by the large number of rules applied)
  • Ability to create rules that analyse the thousands if not millions of findings created (i.e. create findings from findings); see the sketch after this list
    • this is the ability to perform multi-phase analysis, each phase using different rules/techniques and targeted at different types of vulnerabilities (for example SQL Injection vs Direct Object References)
  • Ability to visualize the data that was created (in its multiple stages of maturity) so that a security consultant (and/or app developer) can help to connect the dots (with more scripts or config settings)
  • Ability to add 'business logic analysis' to the findings discovered (for example, when taking Authorization and Authentication activities into account, a 'direct SQL execution' or 'file upload' security vulnerability finding in an admin panel might actually be a feature)
  • Ability to re-package the final findings into the SDL tools currently used by the client (bug tracking, collaboration, IDEs), in a way that makes sense to the client (i.e. using their terminology and workflows) and can be immediately consumed
  • Ability to package all analysis (and rules, workflows, scripts, etc...) into a single execution point (i.e. an *.exe). This is the 'big button' that can be inserted into the Build process
  • Ability to execute, individually, the complete analysis required to confirm (and ideally to exploit) a particular issue. This is the 'small button' that can check if ONE issue has been fixed
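
To make the 'findings from findings' idea concrete, here is a sketch of a second-phase rule: it correlates the raw taint-flow traces (from a first analysis phase) with the URL-to-controller mappings (from the config-reading step) and promotes into SQL Injection findings only the traces that are reachable from an unauthenticated URL. The Finding type and the sample data are hypothetical; the point is that findings are just data that later phases can query.

    // Sketch: multi-phase analysis, i.e. 'creating findings from findings'.
    // The Finding type and the sample data are HYPOTHETICAL illustrations.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Finding
    {
        public string Source, Sink, Controller, Kind;
    }

    class MultiPhaseAnalysis
    {
        static void Main()
        {
            // Phase 1 output: raw taint-flow traces created by the mass-generated rules
            var rawTraces = new List<Finding>
            {
                new Finding { Source = "LoginForm.setUsername", Sink = "Statement.executeQuery",
                              Controller = "com.app.LoginController", Kind = "TaintTrace" },
                new Finding { Source = "AdminForm.setQuery",    Sink = "Statement.executeQuery",
                              Controller = "com.app.AdminController", Kind = "TaintTrace" }
            };

            // From the URL/controller mapping phase: controllers reachable before login
            var preAuthControllers = new HashSet<string> { "com.app.LoginController" };

            // Phase 2 rule: promote a taint trace into a SQL Injection finding only if
            // it reaches a SQL sink AND its controller is reachable pre-auth (the
            // AdminController trace might actually be a feature of the admin panel)
            var sqlInjectionFindings =
                from f in rawTraces
                where f.Kind == "TaintTrace"
                   && f.Sink.Contains("executeQuery")
                   && preAuthControllers.Contains(f.Controller)
                select new Finding { Source = f.Source, Sink = f.Sink,
                                     Controller = f.Controller, Kind = "SQLInjection" };

            foreach (var f in sqlInjectionFindings)
                Console.WriteLine("{0}: {1} -> {2} (via {3})",
                                  f.Kind, f.Source, f.Sink, f.Controller);
        }
    }

This is also where the 'business logic analysis' requirement above comes in: the same query, with the authorization data flipped, is what lets us mark the admin-panel trace as a feature rather than a vulnerability.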
And here you can see why the SAST tools really struggle with frameworks: they don't want to play this game. Ironically, the end result is the same 'big button to press and get solid results'; the only difference is how to get there.

My personal view (backed by real-world experience) is that this is the only way that 'good enough' framework support can be added to a SAST tool in a way that will actually be usable by developers.

Note that I said 'good enough', because usually the comment I receive when explaining that we need to do this is: "..well, but only you (Dinis) wants this... and what we (tool vendor XYZ) want to do is to provide 'Good Enough' support".

Unfortunately for the tool vendors, I'm not asking them to create a tool that would only add value to a small number of 'expert security consultants'. I'm describing what they will need to do in order to add 'good enough' support for frameworks to their tools. Only then can security consultants and app developers customize those tools and deploy them to a wide audience (finally being able to have 'decent support' for the frameworks used and the target apps). The cases where there is no need to customize the engine (or rules) should be seen as 'free passes' (i.e. easy sales).

The bottom line is that, if the path chosen by the tool vendors really worked, then today (Oct 2011) we should have much better Framework support in our tools. The reality is that our current SAST tools don't even have decent support for vanilla Java or .NET language behaviours (for example: reflection, collections, arrays, base-class behaviour). And part of the reason we currently struggle with Java or .NET is that their core libraries are themselves a Framework :)

The good news is that I have shown with O2 how my proposed model can work in the real world. It was done on top of an Open Source platform (O2), and it is out there for others to learn from and copy.

Unfortunately, I am one of the few O2 users that can really do this, so the next step is to find a way to scale O2's techniques/usability and help SAST (and other) tools to develop/expose similar technology and workflows.

Finally, the other reason why the tool vendors are not doing this is that there is very little 'public' (i.e. 'on the record') customer demand for it! Those nasty NDAs have a powerful side-effect on buyers (and end users), who won't publicly say what they really think.

So in some ways, it is not 100% the vendors' fault. They tend to react to their paying customers' needs, who (since they can't say "the tool doesn't really work in my environment") tend to ask for things like: "You need to be able to scan XYZ millions of lines of code", "You need to have support for Oracle databases", "You need to have a report for PCI XYZ", "You need to support language XYZ", etc...

Add to this the fact that SAST vendors:
  • don't see the security consulting companies (who would ask for the capabilities described above) as their partners (i.e. they try to get as much money from them as possible), 
  • want to control all/most of the technology that they consume/create
  • don't have enough paying customers that put them on the ropes and demand that their tools really work
  • still believe (or want to believe) that their tools actually work
  • don't have to deal with the side-effects of 'applications scanned by their product getting exploited by malicious attackers' (i.e. being sued by their clients or by the attackers' victims)
and you have a world where the SAST vendors don't have a direct incentive to go down this path.

Note that some paying customers DO get some value from the current SAST tools (the ones that don't have SAST tools as shelfware). And since there are no popular alternatives (O2's market share is still very small :) ), these customers are resigned to the current status quo (the others are either trying to ignore the fact that they spent a pile of money on a tool that they have not found a way to make work in their environment, or are trying to hire a consulting company to make it work).

The tragedy is that SAST's market could be enormous!!!

Just imagine that we were able to use SAST tools in a way that really mapped/visualized/analyzed an entire code/data flow, and created 'solid, defensible and comprehensive' results (with very low False Positives and False Negatives).

Don't you think the developers (and managers, architects, buyers, consumer groups, government agencies, etc..) would be ALL over it?

This is what I aim to say in my 'Making Security Invisible by Becoming the Developer's Best Friends' presentation. If only we could be the developers' best friends by showing them how their app actually works and what the side effects of their code are :)