Fuzzing is the art of sending unexpected input to an application and analysing the results. This is a very important part of making an app resilient, since the app should be able to handle unexpected/crazy inputs.
In my view there are 3 key elements that need to be in place when doing a fuzzing session:
- Execution environment
- Fuzz payloads/state
- Fuzz targets/analysis
Let's look at each one in turn:
1) Execution environment:
Before starting a fuzzing session, one must create an environment that allows fuzz sessions to run automatically. Here are some of the components needed (in this case slightly tweaked for fuzzing TM web-services; a minimal harness sketch follows this list):
- Automatically create a clean testing environment/target (with clean database and some test content)
- Detect when the database has been corrupted, and trigger a rebuild (which in TM means recreating the XML files and/or source-code files)
- Ability to invoke the target WebServices (via the Python API that Arvind has been creating)
- (ideally) Ability to run multiple instances of the target server at the same time (to allow multi-thread requests)
- Ability to detect abnormal behaviour in the application: weird responses, requests that take too long, large server CPU spikes, unexpected file access (maybe outside the TM directory)
- Ability to detect when/if the server crashes (ideally running the target app under a debugger)
- Speed up or slow down requests in order to find the maximum request load supported by the server (if you are running a fuzzing session locally, you should be able to create more requests than the server can handle)
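To make this concrete, here is a minimal sketch of what the 'invoke and watch' part of such a harness could look like (the base URL, endpoint shape and thresholds are all hypothetical assumptions, not the actual TM API):

```python
import time
import requests  # third-party 'requests' library

BASE_URL = "http://localhost:8080"  # hypothetical local TM test instance
SLOW_THRESHOLD = 5.0                # seconds before a request counts as 'too long'

def invoke_and_watch(method, payload):
    """Send one fuzzed request and flag abnormal behaviour."""
    start = time.time()
    try:
        response = requests.post(f"{BASE_URL}/{method}", json=payload,
                                 timeout=SLOW_THRESHOLD * 2)
    except requests.exceptions.Timeout:
        return {"method": method, "payload": payload, "issue": "timeout"}
    except requests.exceptions.ConnectionError:
        # the server may have crashed; the harness should trigger a rebuild here
        return {"method": method, "payload": payload, "issue": "server down"}
    elapsed = time.time() - start
    issues = []
    if elapsed > SLOW_THRESHOLD:
        issues.append(f"slow response ({elapsed:.1f}s)")
    if response.status_code >= 500:
        issues.append(f"server error ({response.status_code})")
    return {"method": method, "payload": payload, "issues": issues,
            "status": response.status_code, "body": response.text}
```

The idea is that every fuzzed request goes through a single choke point that measures, classifies and records; the crash-detection and rebuild logic would hook into the 'server down' branch.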
2) Fuzz payloads/state:
In terms of fuzzing payloads, a great place to start is FuzzDB, which is a good baseline of fuzzing strings to send to an application.
Once we have a way to consume these payloads, the key is to adjust them to the target methods, especially the ones that require some state (i.e. we need to provide some valid data, or the payload never reaches the app).
So yes, some customisation will be needed on a per-WebService-method basis, since that is the only way to ensure maximum coverage.
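As a rough sketch of that per-method customisation (the CreateArticle method, its fields and the FuzzDB file path are made-up examples, not actual TM APIs), think of a template of valid state with a {FUZZ} placeholder:

```python
def load_payloads(path):
    """Read one payload per line from a FuzzDB wordlist file."""
    with open(path, encoding="utf-8", errors="replace") as f:
        return [line.rstrip("\n") for line in f if line.strip()]

# Hypothetical per-method template: valid state plus a {FUZZ} placeholder
TEMPLATES = {
    "CreateArticle": {
        "folderId": "11111111-1111-1111-1111-111111111111",  # must be a real folder
        "title": "{FUZZ}",
        "body": "some valid body text",
    },
}

def fuzz_cases(method, payloads):
    """Yield one complete request per payload, with the valid state filled in."""
    template = TEMPLATES[method]
    for payload in payloads:
        yield {key: (payload if value == "{FUZZ}" else value)
               for key, value in template.items()}

# e.g. fuzz_cases("CreateArticle", load_payloads("fuzzdb/attack/xss/xss-rsnake.txt"))
```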
Also very important is to look at the state/data returned by the WebService (with special attention paid to cases where a payload sent to WebMethod A is returned from a normal request sent to WebMethod B).
Lack of 'understanding state' is the single biggest reason why fuzzing is hard, but without it we are just doing a 'fuzz-everything-that-moves' strategy (which sometimes works).
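One practical way to spot the 'payload sent to WebMethod A comes back from WebMethod B' case is to wrap every payload in a unique marker and scan all responses for markers that originated elsewhere; a minimal sketch:

```python
import uuid

sent_markers = {}  # marker -> (method, payload) that originally carried it

def tag(method, payload):
    """Wrap the payload in a unique marker so it can be traced across methods."""
    marker = uuid.uuid4().hex
    sent_markers[marker] = (method, payload)
    return f"{marker}{payload}{marker}"

def scan_response(method, response_text):
    """Flag any marker that resurfaces in a *different* method's response."""
    for marker, (origin, payload) in sent_markers.items():
        if marker in response_text and method != origin:
            print(f"payload sent to {origin} came back from {method}: {payload!r}")
```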
Finally, one must differentiate Infrastructure fuzzing from Application fuzzing (although both are very important). Infrastructure fuzzing is when one fuzzes the underlying services, like ASP.NET (in TM's case). These tests should be done once and their results taken into consideration (for example, fuzzing "GUID values on a method that expects a GUID" or "payloads on Headers" only really needs to be done once).
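For instance, the "GUID values on a method that expects a GUID" case only needs a one-off probe like this (the method name and parameter are hypothetical; invoke is the harness helper sketched earlier):

```python
# One-off infrastructure probe: non-GUID strings sent to a GUID-typed parameter.
# ASP.NET should reject these at the binding layer, before the app code runs.
BAD_GUIDS = ["not-a-guid", "", "0" * 100,
             "11111111-1111-1111-1111-11111111111Z"]

def probe_guid_binding(invoke, method="GetFolder"):  # method name is hypothetical
    for value in BAD_GUIDS:
        result = invoke(method, {"folderId": value})
        print(repr(value)[:40], "->",
              result.get("issue") or result.get("issues") or "handled")
```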
3) Fuzz targets/analysis:
When fuzzing, one must have very specific targets and analysis in mind.
For example, here are a couple of Fuzzing Targets (a sketch of the first one follows the list):
- Fuzz all WebService methods with a small subset of crazy payloads (XSS/SQLi strings, large strings, negative numbers, large numbers, non-ASCII chars, weird Unicode chars, etc...)
- Fuzz all WebService methods with valid state and all strings replaced with:
- XSS Payloads
- SQLi Payloads
- Directory traversal payloads
- Fuzz all WebService methods with valid state and all GUIDs replaced with:
- Random GUIDs
- Valid GUIDs that should not be accepted (for example, an existing FolderID used on CreateFolder)
- Fuzz Authentication methods for authentication-specific issues (for example brute-force account/password attacks)
- Fuzz content creation methods for XSS data injection
- Fuzz methods used by multiple users (for example an editor and an admin) and see if payloads injected by the editor are shown to admins
- Fuzz methods in random invocation sequences (to try to detect weird race conditions created by a particular test sequence)
- After creating a mapping of which methods can be invoked by which users (a very specific type of fuzzing):
- Fuzz the methods that should not be accessible to a particular user, to see if there are blind spots that (due to a bug/vulnerability) enable that execution to occur
- Create requests that can be 'consumed' by 3rd-party scanners (like ZAP, Burp, Netsparker, AppScan, etc...) and:
- trigger those tests
- consume their results
- Fuzz the Fuzz results
- There will be cases where the fuzz targets will be the fuzz results of previous sessions
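To illustrate the first target on the list above, here is a minimal sketch that drives a set of methods with a small subset of crazy payloads (the method names are made up, and invoke is the harness helper from section 1):

```python
CRAZY_PAYLOADS = [
    "<script>alert(1)</script>",      # XSS probe
    "' OR '1'='1",                    # SQLi probe
    "A" * 100_000,                    # very large string
    "-1", "99999999999999999999",     # negative / large numbers
    "../../../../etc/passwd",         # directory traversal
    "\u202e\u0000\uffff",             # weird Unicode / control chars
]

METHODS = ["CreateFolder", "CreateArticle", "GetArticle"]  # hypothetical list

def run_target(invoke):
    """invoke(method, payload) -> result dict, provided by the harness."""
    findings = []
    for method in METHODS:
        for payload in CRAZY_PAYLOADS:
            result = invoke(method, payload)
            if result.get("issues") or result.get("issue"):
                findings.append(result)
    return findings
```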
These tests will generate a LOT of data, which needs to be:
- Normalised, with similar responses reported as one (as shown in the sketch after this list)
- Stored (as raw as possible) to allow later analysis
- Analysed, taking into account:
- the expected result for the fuzzed method
- the type of test being performed
- Reported (in an easy to consume and replicate format)
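A minimal sketch of the 'store raw, then normalise' step, where similar responses are grouped by a fingerprint of status code plus a masked body (the exact fields depend on how the harness records results):

```python
import hashlib
import json
import re

def fingerprint(result):
    """Collapse similar responses into one bucket: same status + same body shape."""
    body = re.sub(r"\d+", "N", result.get("body", ""))  # mask numbers
    body = re.sub(r"[0-9a-f]{32}", "HEX", body)         # mask markers/hex ids
    key = f"{result.get('status')}::{body[:500]}"
    return hashlib.sha256(key.encode()).hexdigest()

def normalise(results, raw_store="raw_results.jsonl"):
    """Store everything raw, then keep one representative per fingerprint."""
    buckets = {}
    with open(raw_store, "a", encoding="utf-8") as f:
        for result in results:
            f.write(json.dumps(result) + "\n")               # raw copy for later analysis
            buckets.setdefault(fingerprint(result), result)  # first seen = representative
    return buckets
```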
It is important to restate that each type of target requires a different type of analysis, taking into account what is being tested and (more importantly) what the expected method invocation result is.
Another very important concept is the need to have a fully automated fuzzing environment. This should be a 'fuzz-and-forget' world where the fuzz tests are executed without any human intervention (don't forget to add a 'stop' button :) )
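In practice the 'fuzz-and-forget' loop (and its 'stop' button) can be as simple as a file check, as in this sketch:

```python
import os
import time

STOP_FILE = "stop.fuzz"  # hypothetical 'stop button': create this file to end the session

def fuzz_and_forget(targets):
    """Run every fuzz target in a loop until someone presses 'stop'."""
    while not os.path.exists(STOP_FILE):
        for target in targets:
            if os.path.exists(STOP_FILE):
                break
            target()          # each target is a no-argument callable
        time.sleep(1)         # small pause between full passes
```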
Although this sounds like a lot, the hardest part is creating the environment required to execute the first couple of Fuzzing Targets (as described above). Once that is done, the rest are variations.
Finally, always keep in mind that the objective is to create something that can be added to the build environment, so that these tests are executed automatically (with any new findings/fixes reported to the developers).