Sunday, 26 September 2010

How many 'easy-to-fix' XSS have we fixed in the last 12 months

On the topic of 'easy-to-fix' XSS, one question that we (as an industry) should be able to answer is: 'How many of those easy-to-fix XSS have actually been fixed and pushed into production?'

After all, if they are 'easy' to fix (and cheaper to create, test, deploy, etc.), surely the affected parties (application owners and developers) will have very little resistance to making those fixes, right?

So where can I find these numbers? What I am after are three values:

1) the number of 'easy-to-fix' XSS discovered in real-world applications
2) the number of those 'easy-to-fix' XSS that were actually fixed and pushed into production
3) the percentage of all discovered XSS that fall into the 'easy-to-fix' category

If we don't have these numbers, how do we know we are being effective, and that these issues are indeed 'easy-to-fix'?

In a Twitter thread after my previous blog post, Chris from Veracode commented that he guesses (i.e. no hard data) that about 50% of the 'easy-to-fix' XSS that Veracode has found are now fixed and deployed into production.

Even assuming that 50% figure is correct (it looks a bit high to me, and if I remember correctly WhiteHat's numbers are much lower), shouldn't it be much higher? After all, they are 'easy-to-fix'.

.....

As you can tell by my use of quotes around the 'easy-to-fix' concept, I don't buy the idea that XSS are easy to fix.

Even in the cases where the 'fix' is 'just' applying an encoding to a particular method call (say, going from Response.Write(Request["name"]) to Response.Write(AntiXSS.HtmlEncode(Request["name"]))), there is a big difference between 'making a code change' and 'pushing it into production'.
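For reference, this is roughly what that 'easy' change looks like inside an ASP.NET page (a minimal sketch, assuming the Microsoft AntiXSS library is referenced and using the same AntiXSS.HtmlEncode call as above):

    // Before: the request value is echoed straight into the response (reflected XSS)
    Response.Write(Request["name"]);

    // After: the 'easy-to-fix' change - HTML-encode the value before writing it
    // (this is only the right encoder when the value lands in an HTML element
    //  context - see the analysis steps below)
    Response.Write(AntiXSS.HtmlEncode(Request["name"]));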

Here is a quick list of what should happen when a developer knows about an 'easy-to-fix' XSS vulnerability:

1) XSS is discovered by security consultant
2) XSS is communicated to the development team
3) The development team analyses the XSS finding:
- reproduce the XSS reported
- discover what causes the XSS (root-cause analysis)
- what is the DOM context of the XSS injection point (HTML, attribute, JavaScript block, CSS)? (see the encoding sketch after this list)
- was it a single instance/mistake or is it a systemic flaw?
- where can it be fixed?
- of the multiple places it can be fixed what is the one with the least impact?
- can it be solved/mitigated without making code changes, for example using a config setting or a WAF (i.e. virtual patching)?
- is there a clear understanding of the side-effects of applying the fixes?
- are there cases (or the potential) for double-encoding problems? (i.e. is there an understanding of all the code + data paths that lead to the location of the fix? see the double-encoding sketch after this list)
- were other parts of the application created with the assumption that there would be no encoding on that particular field? (these will break once the fix is applied)
- who is going to pay for the fix? the developers? the client?
4) Once a strategy is created to fix the XSS, put it on the development schedule and apply the fix
5) Test the fix and make sure:
- that the XSS was correctly resolved/mitigated (who will do this? the current testers/developers who were not aware of the XSS in the first place, or the original security consultant?)
- that there is no business impact (i.e. the application still behaves the same way and there is NO user-experience impact); ideally this should be done by the QA team
6) Deploy the fix into production
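
To make the DOM-context point in step 3 concrete, here is a hedged sketch (illustrative markup only, inside an ASP.NET page, using the same AntiXSS.* naming as above for the Microsoft AntiXSS library's encoders) of why 'just HTML-encode it' is not a universal answer - the same value needs a different encoder depending on where it is written:

    string name = Request["name"];   // untrusted input

    // 1) HTML element context - HtmlEncode is the right call here
    Response.Write("<p>Hello " + AntiXSS.HtmlEncode(name) + "</p>");

    // 2) HTML attribute context - needs attribute encoding
    Response.Write("<input type='text' value='" + AntiXSS.HtmlAttributeEncode(name) + "' />");

    // 3) JavaScript block context - needs JavaScript string encoding, not HTML encoding
    Response.Write("<script>var name = " + AntiXSS.JavaScriptEncode(name) + ";</script>");

    // (a value written into a CSS context would need yet another treatment)

Pick the wrong encoder for the context and the 'fix' either doesn't stop the XSS or breaks the page.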

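And to illustrate the double-encoding question from that same step (a hedged sketch; assume some upstream code, anywhere in the code/data path, has already encoded the value):

    // Upstream code has already HTML-encoded the value somewhere else,
    // e.g. with Request["name"] = "Smith & Jones":
    string upstream = HttpUtility.HtmlEncode(Request["name"]);   // "Smith &amp; Jones"

    // Applying the 'easy' fix blindly at the output point double-encodes it:
    Response.Write(AntiXSS.HtmlEncode(upstream));   // writes "Smith &amp;amp; Jones",
                                                    // which the browser shows as "Smith &amp; Jones"

No security hole here, but a visible user-experience bug - exactly the kind of side effect that makes these changes less 'easy' than they look.
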
(Note that this is a simplified version of what tends to happen in the real world, since many other factors affect it: from internal/external politics, to management support for security fixes, to a lack of attacks, to the fact that the original team that created the application is long gone and the developer allocated to make the 'easy-to-fix' change doesn't really know how the application he/she is about to fix actually works, etc.)

Some would call the above process 'easy-to-fix' ....

For me, 'easy-to-fix' code changes (if there is such a thing) are actions that:
- don't take much time,
- involve an amount of work (and side effects) that is easy to understand/visualise,
- are cheap,
- can be deployed into production quickly and without worries,
- DON'T have any potential to create business/user impact (this is the most important one),
- are, in essence, invisible to just about all parties involved.

I think the crowd that calls XSS 'easy-to-fix' is confusing 'it looks easy to make the code change that I think would work' with 'making a code change that accurately resolves the reported problem without causing any impact to the application' (which is what matters to the developers/business-owners).

My fundamental problem with the 'easy-to-fix' concept is that it:
- calls the developers (and application owners) names (as in: "you guys are stupid for not fixing those problems, after all they are 'easy-to-fix'"),
- alienates them, and
- shows them that we don't have a very good idea of how their applications and business work.

For a more detailed explanation of this gap between security consultants' 'recommendations' and what the business/developers think of them, see John Viega's keynote at OWASP's AppSec Ireland conference.