Since he works on Linux with Python, he can't reuse the C# test cases I created a while back (see Testing TeamMentor 2.0 security using O2), which is not a problem since this will help expand TeamMentor's testability into the Linux/OS X world.
Once Arvind looked at the TM WebServices (all 100+ of them), he was like '...hmm, do I need to test all of these using Burp?', to which I replied '...well... not only would that be close to impossible... what I really want you to do is to script your tests, so that we have:
a) a set of test scripts that invoke the TM WebServices the way they are supposed to be used, and
b) abuse/security tests on top of those'
And this is exactly what is happening. Here are 3 Python scripts (in chronological order) that show the evolution of Arvind's scripts (these are from a private GitHub repository, but will soon be in a public one):
- https://gist.github.com/2471374 - Connect to WebServices and login
- https://gist.github.com/2471382 - First pass at creating a tool to test anonymous access
- https://gist.github.com/2471402 - Better, more scalable and documented version
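For readers who want a feel for the pattern, here is a minimal sketch of that first 'connect and login' step. This is not Arvind's actual code (the gists above are the real thing); the server URL, endpoint path, parameter names and credentials below are placeholders I made up for illustration:

```python
# Minimal login sketch, assuming a JSON-over-HTTP login method.
# NOTE: the URL, endpoint path and credentials below are hypothetical
# placeholders, not TeamMentor's real API surface.
import requests

TM_SERVER = 'https://tm-server.example.com'      # assumed test instance
LOGIN_URL = TM_SERVER + '/webservices/login'     # hypothetical endpoint

def login(username, password):
    """Authenticate and return a requests.Session carrying the session cookie."""
    session = requests.Session()
    response = session.post(LOGIN_URL,
                            json={'username': username, 'password': password})
    response.raise_for_status()                  # fail loudly on HTTP errors
    return session                               # session id travels in the cookies

if __name__ == '__main__':
    tm_session = login('admin', 'password')      # placeholder credentials
    print('Logged in; cookies:', tm_session.cookies.get_dict())
```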
At the moment these scripts are still in 'stand-alone' mode; what is going to happen next is that we are going to create UnitTests for each of TeamMentor's WebServices.
And here is a point I really want to make: doing comprehensive Security Testing of WebServices without access to (or the ability to create) such a Testing Infrastructure (i.e. the ability to invoke each WebMethod individually) is just about impossible, since there is not enough state to inject the security payloads/abuse cases.
Or, in other words:
... First you create Tests for WebServices, then you add the abuse/security cases...
And this is exactly the point where Arvind is at the moment: he can invoke all the WebServices, but he can't move much further, since he doesn't have enough 'business data' to successfully invoke most methods.
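To show where this is heading, here is a sketch of what one of those per-WebService UnitTests could look like: one test that invokes a WebMethod the way the app does, and one abuse case layered on top. Again, the endpoint paths and credentials are assumptions, not the real TM API:

```python
# Sketch of the two test types, using Python's unittest.
# NOTE: server URL, endpoint paths and credentials are hypothetical.
import unittest
import requests

TM_SERVER = 'https://tm-server.example.com'

class Test_GetLibraries(unittest.TestCase):

    def setUp(self):
        # log in once per test so each test starts from a known state
        self.session = requests.Session()
        self.session.post(TM_SERVER + '/webservices/login',
                          json={'username': 'editor', 'password': 'password'})

    def test_authenticated_user_can_list_libraries(self):
        # a) invoke the WebService the way it is supposed to be used
        response = self.session.post(TM_SERVER + '/webservices/getLibraries')
        self.assertEqual(response.status_code, 200)

    def test_anonymous_user_cannot_list_libraries(self):
        # b) the abuse case on top: same WebMethod, but with no session
        response = requests.post(TM_SERVER + '/webservices/getLibraries')
        self.assertNotEqual(response.status_code, 200)

if __name__ == '__main__':
    unittest.main()
```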
Let's walk through a real example: creating a TeamMentor Library.
- To create a Library, the SOAP (or JSON) request must be well formatted
- A valid session Id must also be provided (retrieved via a successful login), together with the CSRF token
- There are 3 possible responses from this action:
- A) Library created OK
- B) Library failed to be created (clue given in the response)
- C) Something else happened (for example unexpected error thrown)
- Depending on the input, the response may or may not be an issue (for example, did we fail to create a library when we should have been able to create it, or vice-versa?). What about the C cases, when something unexpected happened? Did it fail safely?
- In TeamMentor, when a library is created, an XML file and folder are created with its name, so a library created outside the Library DB root would be a security issue. This is protected by a RegEx, so let's see how effective that RegEx is (see the sketch after this list)
- For libraries correctly created (some maybe with payloads), an analysis will need to be done on the other locations where that data is now used (not only on the WebServices layer, but also on the GUI side of things)
- There are also a bunch of usability issues created by the need to protect the app, so it is really important to have visibility into what works and what doesn't (for example, what happens if a library is called 'hello ..\.. world'?)
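Here is a sketch of what the abuse side of that CreateLibrary test could look like. Once more, the endpoint, parameter name and response marker are assumptions, and a real request would also carry a valid session id and CSRF token (omitted here for brevity):

```python
# Sketch of a CreateLibrary path-traversal abuse test.
# NOTE: endpoint, parameter name and success marker are hypothetical, and a
# real request would also need a valid session id and CSRF token.
import unittest
import requests

TM_SERVER  = 'https://tm-server.example.com'
CREATE_URL = TM_SERVER + '/webservices/createLibrary'

TRAVERSAL_PAYLOADS = ['..\\..\\pwned', '../../pwned', 'hello ..\\.. world']

class Test_CreateLibrary_Abuse(unittest.TestCase):

    def test_traversal_payloads_fail_safely(self):
        for name in TRAVERSAL_PAYLOADS:
            response = requests.post(CREATE_URL, json={'libraryName': name})
            # outcome A (library created) would be a path-traversal issue
            self.assertNotIn('library created', response.text.lower())
            # outcome C (unexpected error) would mean it did not fail safely
            self.assertLess(response.status_code, 500,
                            'server error on payload %r' % name)

if __name__ == '__main__':
    unittest.main()
```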
Now, this type of testing/security analysis is, in my view, what we need to be doing at the moment. And to be fair, most good application security testers will (try to) do this. The problems are:
- they will either do it manually or in their obscure scripting world,
- they will struggle to scale (especially on a large number of WebServices)
- usually they deliver the end result in PDFs (and from No more PDFs with Security Findings you can see how much I like these PDFs).
Even more importantly, look at the two types of tests that we are creating:
- a set of test scripts that invoke the TM WebServices the way they are supposed to be used
- abuse/security tests on top of those
Although these tests are created by the 'Application Security' effort (because they can't do their job without them, while the developers and QA can), these tests are also VERY useful for developers and QA.
For example, I want to run them on each build and add them as one of the 'can we ship version xyz' criteria (i.e. 0% test failures).
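As a sketch of how that gate could work (assuming the UnitTests live in a tests/ folder), the build step only needs to run the whole suite and key off the exit code:

```python
# 'Can we ship?' gate: run every test and fail the build on any failure.
import sys
import unittest

suite  = unittest.defaultTestLoader.discover('tests')   # assumed tests/ folder
result = unittest.TextTestRunner(verbosity=2).run(suite)
sys.exit(0 if result.wasSuccessful() else 1)            # 0% test failures, or no ship
```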
This is a perfect example of Security adding value to the Software development practice, and evolving into Engineering Productivity.
As a TM developer, this is exactly what I want Application Security to give me:
- Better tests
- Abuse cases for those tests
- More test coverage
- Better understanding of how my app works/behaves
- Reusable code that I can add to my test suite and CI environment