I've been asked by Lisa Crispin to contribute a few paragraphs on security testing to an upcoming book on agile testing that she and Janet Gregory are co-authoring. Here's what I came up with:
Security testing is a broad topic that cannot possibly be covered in a few paragraphs. Whole books have been devoted to the subject. Here we will try to at least provide some guidelines and pointers to books and tools that might prove useful to agile teams interested in security testing.
Just like functional testing, security testing can be viewed and conducted from two perspectives: from the inside out (white-box testing) and from the outside in (black-box testing).
Inside-out security testing assumes that the source code for the application under test is available to the testers. The code can be analyzed statically with a variety of tools that try to discover common coding errors that can leave the application vulnerable to attacks such as buffer overflows or format string exploits. (Resources:
http://en.wikipedia.org/wiki/Buffer_overflow and
http://en.wikipedia.org/wiki/Format_string_vulnerabilities)
A list of tools that can be used for static code analysis can be found here:
http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis
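To make this a bit more concrete, here is the kind of coding error a static analyzer is good at flagging. The exact checks depend on the language: for C code, analyzers look for unsafe calls such as strcpy() or printf(user_input) that lead to buffer overflows and format string attacks, while for a Python code base a tool such as Bandit warns about patterns like the ones in this made-up snippet (the function names and paths are just placeholders):

import subprocess

DB_PASSWORD = "s3cret"   # hardcoded credential -- typically flagged by static analysis

def archive_logs(filename):
    # Building a shell command from an externally supplied string is a
    # classic command injection risk -- also typically flagged.
    subprocess.call("tar czf /backups/logs.tgz " + filename, shell=True)

def archive_logs_safer(filename):
    # Passing an argument list and avoiding the shell removes the injection vector.
    subprocess.call(["tar", "czf", "/backups/logs.tgz", filename])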
The fact that the testers have access to the source code of the application also means that they can map what some books call "the attack surface" of the application: the list of all the inputs and resources used by the program under test. Armed with knowledge of the attack surface, testers can then apply a variety of techniques that attempt to break the security of the application. A very effective class of such techniques is fuzzing, which is based on fault injection: testers try to make the application fail by feeding it various types of unexpected inputs. These inputs range from carefully crafted strings used in SQL injection attacks, to random byte changes in given input files, to random strings fed as command-line arguments. (Resources:
http://www.fuzzing.org/category/fuzzing-book/ and
http://www.fuzzing.org/fuzzing-software)
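Here is a minimal sketch of the "random byte changes in a given input file" flavor of fuzzing described above. It assumes the program under test takes an input file name as its only argument; the target command and the seed file name are placeholders you would replace with your own.

#!/usr/bin/env python
"""Minimal mutation fuzzer sketch: flip random bytes in a known-good input
file and feed the result to the program under test, watching for crashes."""

import random
import subprocess

SEED_FILE = "sample_input.dat"   # known-good input for the program under test
TARGET_CMD = ["./parse_input"]   # placeholder: program that takes a file argument
ITERATIONS = 1000

def mutate(data, max_flips=10):
    buf = bytearray(data)
    for _ in range(random.randint(1, max_flips)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def main():
    seed = open(SEED_FILE, "rb").read()
    for i in range(ITERATIONS):
        with open("fuzzed.dat", "wb") as f:
            f.write(mutate(seed))
        ret = subprocess.call(TARGET_CMD + ["fuzzed.dat"])
        if ret < 0:
            print("iteration %d: target killed by signal %d" % (i, -ret))
        elif ret != 0:
            print("iteration %d: target exited with status %d" % (i, ret))

if __name__ == "__main__":
    main()

Real fuzzing frameworks (see the fuzzing.org links above) are much smarter about how they mutate inputs and how they detect failures, but the mutate-run-observe loop is the same.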
The outside-in approach is the one mostly used by attackers who try to penetrate the servers or the network hosting your application. As a security tester, you need to adopt the same mindset attackers do, which means using your creativity to discover and exploit vulnerabilities in your own application. You also need to stay up to date with the latest security news and updates related to the platform/operating system your application runs on. These tasks are by no means easy; they require extensive knowledge, and as such they are mostly outsourced to third parties that specialize in security testing.
So what are agile testers to do when faced with the apparently insurmountable task of testing the security of their application? Here are some practical, pragmatic steps that anybody can follow:
1. Adopt a continuous integration (CI) process that periodically runs a suite of automated tests against your application.
2. Learn how to use one or more open source static code analysis tools. Add a step to your CI process which consists of running these tools against your application code. Mark the step as failed if the tools find any critical vulnerabilities (a minimal wrapper sketch for this appears after the list).
3. Install an automated security vulnerability scanner such as Nessus (http://www.nessus.org/nessus/). Nessus can be run in a command-line, non-GUI mode, which makes it suitable for inclusion in a CI tool. Add a step to your CI process which consists of running Nessus against your application. Capture the Nessus output in a file and parse that file for any high-importance security holes found by the scanner. Mark the step as failed when any such holes are found (see the parsing sketch after this list).
4. Learn how to use one or more open source fuzzing tools. Add a step to your CI process which consists of running these tools against your application. Mark the step as failed if the tools find any critical vulnerabilities; the same wrapper pattern sketched below for step 2 applies here.
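For steps 2 and 4, the glue between the tool and the CI server can be as simple as a wrapper script that runs the tool, looks for a severity marker in its report, and exits non-zero so the CI server marks the step as failed. The command and the marker string below are placeholders for whatever analyzer or fuzzer you end up choosing:

#!/usr/bin/env python
"""CI wrapper sketch: run a static analysis (or fuzzing) tool and fail the
build step if its report contains critical findings. The tool command and
severity marker are placeholders, not any specific tool's CLI."""

import subprocess
import sys

TOOL_CMD = ["static_analyzer", "--report", "findings.txt", "src/"]   # placeholder
SEVERITY_MARKER = "CRITICAL"                                          # placeholder

def main():
    subprocess.call(TOOL_CMD)
    critical = [line.strip() for line in open("findings.txt")
                if SEVERITY_MARKER in line]
    if critical:
        print("Found %d critical findings:" % len(critical))
        for line in critical:
            print("  " + line)
        sys.exit(1)   # non-zero exit -> CI marks this step as failed
    print("No critical findings.")

if __name__ == "__main__":
    main()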
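For step 3, the parsing part might look like the sketch below. It assumes the scan results were saved in Nessus's pipe-delimited NBE report format, where (at least in the Nessus versions I'm assuming here) high-severity results are labeled "Security Hole"; if your Nessus version exports a different format, adjust the parsing accordingly.

#!/usr/bin/env python
"""Sketch of a CI step that parses a Nessus report for high-severity holes.
Assumes a pipe-delimited NBE report where result lines look roughly like:
results|subnet|host|service|plugin id|severity label|description"""

import sys

REPORT_FILE = "nessus_scan.nbe"   # wherever the CI step saved the scan output

def main():
    holes = []
    for line in open(REPORT_FILE):
        fields = line.strip().split("|")
        if fields and fields[0] == "results" and "Security Hole" in line:
            holes.append(line.strip())
    if holes:
        print("Nessus reported %d high-severity holes:" % len(holes))
        for hole in holes:
            print("  " + hole)
        sys.exit(1)   # fail the CI step
    print("No high-severity holes reported.")

if __name__ == "__main__":
    main()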
As with any automated testing effort, running these tools is no guarantee that your code and your application will be free of security defects. However, running them will go a long way towards improving the security of your application. As always, the 80/20 rule applies: these tools will probably find the most common 80% of security bugs while requiring only 20% of your security budget.
To find the remaining 20% of security defects, you're well advised to spend the other 80% of your security budget on high-quality security experts. They will be able to test your application's security thoroughly using techniques such as SQL injection, code injection, remote code inclusion and cross-site scripting. While there are tools that try to automate some of these techniques, they are no match for a trained professional who takes the time to understand the inner workings of your application in order to craft the perfect attack against it.
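To give a flavor of the simplest of these techniques, SQL injection in its most basic form is nothing more than a query built by string concatenation. A small sketch using Python's bundled sqlite3 module (the table and login functions are made up for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login_vulnerable(name, password):
    # Building the query by concatenation lets crafted input rewrite it.
    query = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchall()

def login_parameterized(name, password):
    # Bound parameters keep the input as data, never as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ? AND password = ?",
        (name, password)).fetchall()

# The classic injection string bypasses the password check entirely:
print(login_vulnerable("alice", "' OR '1'='1"))       # returns the row
print(login_parameterized("alice", "' OR '1'='1"))    # returns nothing

A skilled tester goes far beyond this, of course, combining application-specific knowledge with variations that automated scanners rarely find.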