Friday, September 19, 2008
Presubmit testing at Google
Here is an interesting blog post from Marc Kaplan, test engineering manager at Google, on their strategy of running what they call 'presubmit tests' -- tests that run automatically before the code gets checked in. They include performance tests, which compare the performance of the new code against baselines from the previous week and then report back nice graphs showing the delta. Very cool.
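The core idea -- run a benchmark at presubmit time, compare it against last week's baseline, and gate the check-in on the delta -- can be sketched in a few lines. This is just an illustrative sketch, not Google's actual implementation; the benchmark name, baseline numbers, and tolerance here are all made up for the example.

```python
import time

# Hypothetical baselines recorded from last week's run (seconds per call).
BASELINES = {"parse_request": 0.010}
TOLERANCE = 0.10  # allow up to a 10% regression before failing presubmit

def benchmark(fn, runs=100):
    """Time fn over several runs and return the average seconds per call."""
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

def presubmit_check(name, fn):
    """Compare fn's current timing to its baseline; return True on pass."""
    elapsed = benchmark(fn)
    baseline = BASELINES[name]
    delta = (elapsed - baseline) / baseline
    ok = delta <= TOLERANCE
    print(f"{name}: {elapsed * 1000:.3f} ms vs baseline "
          f"{baseline * 1000:.3f} ms ({delta:+.0%}) -> "
          f"{'PASS' if ok else 'FAIL'}")
    return ok
```

A real system would persist the weekly baselines and plot the deltas over time, but the pass/fail decision reduces to this kind of comparison.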
We do something similar in my group at Nuance. There, all tests must pass before you can check in, including the performance tests (which take about 30-40 minutes on a slow single-processor machine). Usually any change you make affects at least one test baseline, so it's not as onerous as it sounds.
The accuracy tests are a bit different, as they run on the grid and take much longer. Those are run for releases (about 4 a month).
We have gotten some flak in the past for being so strict about our testing and commit policies. Every so often we are forced to give an 'under the table' release to a researcher, and without exception it has come back to haunt them. Not sure why they keep thinking it's a good idea...