Details
pyUnitPerf tests are meant to transparently add performance testing capabilities to existing pyUnit test suites. The pyUnitPerf framework introduces two new types of tests:
- TimedTest: runs an existing pyUnit test case and fails if the elapsed time exceeds a specified limit
- LoadTest: runs an existing pyUnit test case while simulating concurrent users and iterations (the two can be composed, as sketched below)
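Both are decorators in the JUnitPerf sense: each wraps an existing pyUnit test (or another decorator) and can itself be added to a TestSuite and run by a regular pyUnit test runner. As a preview, here is a minimal sketch of how the two compose; it assumes the ExampleTestCase defined in the next listing and uses only the constructor signatures shown in the examples later in this post:

# Sketch only: wraps the ExampleTestCase from the next listing in a
# LoadTest (concurrent users) and then in a TimedTest (elapsed time limit).
from unittest import TestSuite, TextTestRunner
from ExampleTestCase import ExampleTestCase
from LoadTest import LoadTest
from TimedTest import TimedTest

testCase = ExampleTestCase("testOneSecondResponse")
loadTest = LoadTest(testCase, 5)       # simulate 5 concurrent users
timedTest = TimedTest(loadTest, 1.5)   # fail if the load test takes more than 1.5 sec.

suite = TestSuite()
suite.addTest(timedTest)
TextTestRunner(verbosity=2).run(suite)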
Assume you have the following pyUnit test case in a file called ExampleTestCase.py:
from unittest import TestCase, TestSuite, TextTestRunner, makeSuite
import time

class ExampleTestCase(TestCase):

    def __init__(self, name):
        TestCase.__init__(self, name)

    def testOneSecondResponse(self):
        time.sleep(1)

    def suite(self):
        return makeSuite(self.__class__)

if __name__ == "__main__":
    example = ExampleTestCase("testOneSecondResponse")
    runner = TextTestRunner()
    runner.run(example.suite())
Admittedly this is a contrived example, since the testOneSecondResponse method simply sleeps for 1 second and does not actually test anything, but it serves to illustrate the pyUnitPerf functionality.
Assume you want to create a timed test that waits for the completion of the ExampleTestCase.testOneSecondResponse method and then fails if the elapsed time exceeds 1 second. With pyUnitPerf, all you need to do is write the following code in a file called ExampleTimedTest.py:
from unittest import TestSuite, TextTestRunner

from ExampleTestCase import ExampleTestCase
from LoadTest import LoadTest
from TimedTest import TimedTest

class ExampleTimedTest:

    def __init__(self):
        self.toleranceInSec = 0.05

    def suite(self):
        s = TestSuite()
        s.addTest(self.make1SecondResponseTimedTest())
        return s

    def make1SecondResponseTimedTest(self):
        """
        Decorates a one second response time test as a
        timed test with a maximum elapsed time of 1 second.
        """
        maxElapsedTimeInSec = 1 + self.toleranceInSec
        testCase = ExampleTestCase("testOneSecondResponse")
        timedTest = TimedTest(testCase, maxElapsedTimeInSec)
        return timedTest

if __name__ == "__main__":
    TextTestRunner(verbosity=2).run(ExampleTimedTest().suite())
The suite() method constructs a TestSuite object and adds to it the test object returned by the make1SecondResponseTimedTest method. This method instantiates an ExampleTestCase object, passing it the method name to be tested: testOneSecondResponse. We then pass the testCase object to a TimedTest object, together with the desired maximum time to wait for the completion of the test (to which we add a 50 msec. tolerance to account for time potentially spent setting up and tearing down the test case). In the __main__ section of the module, we simply call the pyUnit TextTestRunner, passing it the suite.
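If you prefer not to wrap the setup in a class, the same timed test can be assembled inline. This is just a sketch, using only the ExampleTestCase and TimedTest constructors shown above:

# Inline equivalent of ExampleTimedTest (sketch).
from unittest import TestSuite, TextTestRunner
from ExampleTestCase import ExampleTestCase
from TimedTest import TimedTest

toleranceInSec = 0.05
suite = TestSuite()
suite.addTest(TimedTest(ExampleTestCase("testOneSecondResponse"),
                        1 + toleranceInSec))
TextTestRunner(verbosity=2).run(suite)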
If you run: python ExampleTimedTest.py at a command prompt, you will get the following output:
testOneSecondResponse (ExampleTestCase.ExampleTestCase) ... ok
TimedTest (WAITING): testOneSecondResponse (ExampleTestCase.ExampleTestCase): 1.0 sec.
----------------------------------------------------------------------
Ran 1 test in 1.000s
OK
Now let's make the test fail by requiring the timed test to finish in 0.9 seconds. To do this, simply change

maxElapsedTimeInSec = 1 + self.toleranceInSec

to

maxElapsedTimeInSec = 0.9 + self.toleranceInSec

Running python ExampleTimedTest.py now results in the following output:
testOneSecondResponse (ExampleTestCase.ExampleTestCase) ... ok
TimedTest (WAITING): testOneSecondResponse (ExampleTestCase.ExampleTestCase): 1.0 sec.
FAIL
======================================================================
FAIL: testOneSecondResponse (ExampleTestCase.ExampleTestCase)
----------------------------------------------------------------------
AssertionFailedError: Maximum elapsed time exceeded! Expected 0.95 sec., but was 1.0 sec.
----------------------------------------------------------------------
Ran 1 test in 1.000s
FAILED (failures=1)
Note that the test result for the pyUnit test case (ExampleTestCase.testOneSecondResponse) is still marked as OK, but the result for the TimedTest is marked as FAILED, since the elapsed time was longer than the specified maximum of 0.95 sec.
Let's look at an example of a LoadTest. The following code can be saved in a file called ExampleLoadTest.py:
from unittest import TestSuite, TextTestRunner

from ExampleTestCase import ExampleTestCase
from LoadTest import LoadTest
from TimedTest import TimedTest

class ExampleLoadTest:

    def __init__(self):
        self.toleranceInSec = 0.05

    def suite(self):
        s = TestSuite()
        s.addTest(self.make1SecondResponseSingleUserLoadTest())
        s.addTest(self.make1SecondResponseMultipleUserLoadTest())
        s.addTest(self.make1SecondResponse1UserLoadIterationTest())
        return s

    def make1SecondResponseSingleUserLoadTest(self):
        """
        Decorates a one second response time test as a single user
        load test with a maximum elapsed time of 1 second
        and a 0 second delay between users.
        """
        users = 1
        maxElapsedTimeInSec = 1 + self.toleranceInSec
        testCase = ExampleTestCase("testOneSecondResponse")
        loadTest = LoadTest(testCase, users)
        timedTest = TimedTest(loadTest, maxElapsedTimeInSec)
        return timedTest

    def make1SecondResponseMultipleUserLoadTest(self):
        """
        Decorates a one second response time test as a multiple-user
        load test with a maximum elapsed time of 1.5
        seconds and a 0 second delay between users.
        """
        users = 10
        maxElapsedTimeInSec = 1.5 + self.toleranceInSec
        testCase = ExampleTestCase("testOneSecondResponse")
        loadTest = LoadTest(testCase, users)
        timedTest = TimedTest(loadTest, maxElapsedTimeInSec)
        return timedTest

    def make1SecondResponse1UserLoadIterationTest(self):
        """
        Decorates a one second response time test as a single user
        load test with 10 iterations per user, a maximum
        elapsed time of 10 seconds, and a 0 second delay
        between users.
        """
        users = 1
        iterations = 10
        maxElapsedTimeInSec = 10 + self.toleranceInSec
        testCase = ExampleTestCase("testOneSecondResponse")
        loadTest = LoadTest(testCase, users, iterations)
        timedTest = TimedTest(loadTest, maxElapsedTimeInSec)
        return timedTest

if __name__ == "__main__":
    TextTestRunner(verbosity=1).run(ExampleLoadTest().suite())
The three methods defined in ExampleLoadTest cover some of the most commonly used load test scenarios; see the docstring of each method for details. Running python ExampleLoadTest.py generates this output:
.TimedTest (WAITING): LoadTest (NON-ATOMIC): ThreadedTest: testOneSecondResponse (ExampleTestCase.ExampleTestCase): 1.03099989891 sec.
..........TimedTest (WAITING): LoadTest (NON-ATOMIC): ThreadedTest: testOneSecondResponse (ExampleTestCase.ExampleTestCase): 1.0150001049 sec.
..........TimedTest (WAITING): LoadTest (NON-ATOMIC): ThreadedTest: testOneSecondResponse (ExampleTestCase.ExampleTestCase)(repeated): 10.0 sec.
----------------------------------------------------------------------
Ran 21 tests in 12.046s
OK
This time all the tests passed. Note that the multiple-user load test (make1SecondResponseMultipleUserLoadTest) runs the individual test cases in parallel, each test case in its own thread, so the overall time is only slightly longer than 1 second. The multiple-iteration test (make1SecondResponse1UserLoadIterationTest) runs its 10 iterations of the test case sequentially, so the overall time is 10 seconds. The 21 tests reported by the runner are the 1 + 10 + 10 individual executions of testOneSecondResponse.
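Since users run in parallel and iterations run sequentially, the two dimensions can be combined in a single LoadTest. The following sketch (not part of the example files above, but built from the same constructors) runs 10 concurrent users, each performing 5 iterations, and expects an overall elapsed time of roughly 5 seconds:

# Sketch: 10 concurrent users x 5 sequential iterations each.
# Users run in parallel and iterations run sequentially, so the expected
# elapsed time is roughly iterations * 1 sec. for the sleeping test case.
from unittest import TestSuite, TextTestRunner
from ExampleTestCase import ExampleTestCase
from LoadTest import LoadTest
from TimedTest import TimedTest

users = 10
iterations = 5
toleranceInSec = 0.5   # extra slack for thread scheduling overhead
loadTest = LoadTest(ExampleTestCase("testOneSecondResponse"), users, iterations)
timedTest = TimedTest(loadTest, iterations * 1 + toleranceInSec)

suite = TestSuite()
suite.addTest(timedTest)
TextTestRunner(verbosity=2).run(suite)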
We can make some of the tests fail by decreasing the value of maxElapsedTimeInSec, similar to what we did for the TimedTest.
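For example, tightening the limit in make1SecondResponseMultipleUserLoadTest to well under the 1 second that each simulated user needs should make that load test fail (illustrative change only):

# In make1SecondResponseMultipleUserLoadTest, tighten the limit:
maxElapsedTimeInSec = 0.5 + self.toleranceInSec   # was 1.5 + self.toleranceInSec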
Why should you use pyUnitPerf? Mike Clark makes a great case for using JUnitPerf here. To summarize, you use pyUnitPerf when you have an existing suite of pyUnit tests that verify the correctness of your code, and you want to isolate potential performance issues with your code.
The fact that the pyUnitPerf test suites are completely independent of the pyUnit tests makes it easy to schedule the two types of tests differently:
- you want to run the pyUnit tests very often, since they (should) run fast
- you want to run the pyUnitPerf tests less frequently, for example when verifying that an identified bottleneck has been eliminated (potential bottlenecks can be pinpointed via profiling); performance tests tend to take longer to run, so they could be scheduled as part of a nightly smoke test run (see the sketch after this list)
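One simple way to keep the two runs separate is a small driver script; the run_tests.py name and the --perf flag below are just an illustration, not part of pyUnitPerf:

# run_tests.py (illustrative): run the fast pyUnit suite by default,
# and the slower pyUnitPerf suite only when explicitly requested,
# e.g. from a nightly job: python run_tests.py --perf
import sys
from unittest import TextTestRunner
from ExampleTestCase import ExampleTestCase
from ExampleLoadTest import ExampleLoadTest

if __name__ == "__main__":
    if "--perf" in sys.argv:
        suite = ExampleLoadTest().suite()
    else:
        suite = ExampleTestCase("testOneSecondResponse").suite()
    TextTestRunner(verbosity=2).run(suite)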
1 comment:
The problem with porting patterns/APIs from Java straight to Python is that most of the outcome feels unpythonic. I won't go into my own feelings about Python vs. Java here, but I just want to point out that there is already a rather large core of hard-core Python users who refuse to use pyUnit because of this, and pyUnitPerf is doomed to share this fate, unless of course somebody decides along the way to make it sexy and pythonic and takes on the trouble of lowering the red flag which Java has become to lots of folks.