Titus suggested I post some of the links I keep sending him, so here they are:
Darren Hobbs talks about "Why a 20 minute build is 18 minutes too long". The build in question included acceptance tests. His conclusion is that I/O is the enemy. My take on it is that a continuous build system should run all the unit tests, plus a subset of acceptance tests that are known to run fast. The other acceptance tests, especially the ones that do a lot of I/O, can be run more leisurely, once every X hours.
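The fast/slow split I'm describing can be sketched in plain unittest. This is just an illustration, not code from any of the linked tools -- the `slow` marker and the test names are made up:

```python
import unittest

# Hypothetical marker: acceptance tests known to be I/O-heavy get
# tagged "slow" and are excluded from the per-check-in build; a cron
# job can run the full suite every X hours.
def slow(test_func):
    test_func.slow = True
    return test_func

class AcceptanceTests(unittest.TestCase):
    def test_fast_login_flow(self):
        # Quick, in-memory check: safe to run on every check-in.
        self.assertTrue(True)

    @slow
    def test_full_data_import(self):
        # I/O-heavy scenario: deferred to the periodic build.
        self.assertTrue(True)

def fast_suite():
    """Build a suite containing only the tests NOT marked slow."""
    suite = unittest.TestSuite()
    for name in unittest.TestLoader().getTestCaseNames(AcceptanceTests):
        method = getattr(AcceptanceTests, name)
        if not getattr(method, 'slow', False):
            suite.addTest(AcceptanceTests(name))
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(fast_suite())
```

The continuous build would run `fast_suite()` on every check-in, while the unfiltered suite runs on the slower schedule.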
Michael Hunter, aka The Braidy Tester, posts his impressions on Uncle Bob's SDWest talk on Agility and Architecture. I particularly enjoyed this point: "The first, most important part of making code flexible is to surround it with tests."
Tim Ottinger (from ButUncleBob's crew) blogs on his experience with Testing Hypothetically. It's all about seeing the big picture when writing acceptance tests, and not being bogged down in the implementation-specific details that are best left to the realm of unit testing.
Brian Marick tells us to pay attention when finishing a story. Is the system better off (more malleability is what Brian is looking for) now that the story is done?
Roy Osherove posted the first article in what I hope is a long series on "Achieving and recognizing testable software designs". He discusses a problem that I also raised in the Agile Testing tutorial at PyCon: writing good unit tests is hard. But thinking about making your system testable will go a long way toward increasing the quality of your system. And Roy proceeds with practical recipes on how to achieve this goal. Great article.
While I'm mentioning unit testing articles, here's another good one from Noel Llopis's Games from Within blog: "Making better games with TDD". I haven't seen too many articles on developing games in a TDD manner, and this one really stands out. I particularly enjoyed their discussion of when and when not to use mock objects. A lot of lessons learned in there, a lot of food for thought.
Andrew Glover writes about FIT in his IBM developerWorks article "Resolve to get FIT". Nothing too earth-shattering, but it's always nice -- at least for me -- to see tutorials/howtos on using FIT/FitNesse, since it's an area that I feel is under-represented in the testing literature (but some of us are on a mission to change that :-)
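For readers who haven't seen FIT in action, the core idea is a table whose input columns set fields on a fixture and whose computed column (conventionally named with trailing parentheses) is checked against an expected value. Here's a minimal plain-Python sketch of that idea, modeled on FIT's classic Division example -- it does not use the actual FIT library, and the table runner is my own made-up stand-in:

```python
# A FIT-style "column fixture": input columns become attributes on the
# fixture, and the computed column is compared against the expected value.
class DivisionFixture:
    def quotient(self):
        return self.numerator / self.denominator

# The kind of table a customer might write in a FIT document:
# numerator | denominator | quotient()
TABLE = [
    (10.0, 2.0, 5.0),
    (12.5, 5.0, 2.5),
]

def run_table(fixture_class, table):
    """Play each table row through the fixture; True means the row passed."""
    results = []
    for numerator, denominator, expected in table:
        fixture = fixture_class()
        fixture.numerator = numerator
        fixture.denominator = denominator
        results.append(fixture.quotient() == expected)
    return results
```

In real FIT, of course, the table lives in an HTML document the customer can edit, and the framework colors cells green or red instead of returning booleans.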
Finally (for now), via Keith Ray, here are Agile Development Conversations in the form of podcasts.
I'm thinking about periodically posting stuff like this. Of course, I could just post my del.icio.us bookmarks, but going through the articles and commenting on them a bit helps me in committing the highlights to memory.
Wednesday, March 22, 2006
Bunch O'Links on agile/testing topics
Grig, I think it's valuable to pull out key points and raise them for discussion rather than just assuming everyone can access the links if they want to. I'm glad you did.
>...a continuous build system should run all the unit tests, plus a subset of acceptance tests that are known to run fast.
Not sure if this is exactly what you meant, but I don't think the determining factor in whether to include a test in the build is how fast the test runs. Frankly, I don't think acceptance tests belong in the build that runs after every check-in - not because of execution speed, but because that's just too frequent for an acceptance test. It doesn't really tell you anything useful at that level. The full test suite should run a couple of times a day, definitely at end of day before everyone leaves.
>It's all about seeing the big picture when writing acceptance tests
Definitely agree here. Acceptance tests should be behavioral - they should not care about implementation at all. Implementation details are tested adequately at lower levels of detail.
>I haven't seen too many articles on developing games in a TDD manner...
Noel's article is excellent. Thanks for calling attention to it. I'm recommending it to others, too. He often has very good insights about agile development. I haven't seen many articles about test-driven game development either, but I've heard TDD is widely used in the game industry.
Glad you liked the links and the commentaries. About acceptance tests running after every check-in: why not run them, if they run fast?
I recently refactored the code for the MailOnnaStick application that Titus and I presented at our PyCon Agile Testing tutorial. I sure was glad to see that all the tests passed, including the Selenium GUI tests. It may be overkill to run *all* your acceptance tests on every check-in, as we do, but I don't see any drawback in running at least those tests that complete in under, let's say, 2 minutes.
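One way to pick out those "under 2 minutes" tests is to simply time each one and let the build script filter on the measured duration. A rough sketch, assuming plain unittest -- the sample test case and the threshold constant are made up for illustration:

```python
import io
import time
import unittest

FAST_THRESHOLD = 120.0  # seconds: the "under 2 minutes" cutoff

class SampleAcceptanceTests(unittest.TestCase):
    # Stand-ins for real acceptance tests (e.g. Selenium GUI checks).
    def test_checkout_flow(self):
        self.assertEqual(1 + 1, 2)

    def test_report_generation(self):
        self.assertTrue("report generated")

def measure(testcase_class):
    """Run each test in isolation and record its wall-clock duration."""
    timings = {}
    loader = unittest.TestLoader()
    for name in loader.getTestCaseNames(testcase_class):
        test = testcase_class(name)
        runner = unittest.TextTestRunner(stream=io.StringIO())
        start = time.time()
        runner.run(test)
        timings[test.id()] = time.time() - start
    return timings

def fast_tests(timings, threshold=FAST_THRESHOLD):
    """Names of the tests the continuous build can afford on every check-in."""
    return sorted(tid for tid, secs in timings.items() if secs < threshold)

if __name__ == '__main__':
    print(fast_tests(measure(SampleAcceptanceTests)))
```

You'd want to re-measure periodically, since a test that was fast last month may have grown slow as the feature behind it grew.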
I think you'll like this link. A nice video presentation about Fit.
It's from Rick Mugridge, the lead author of the first book on storytests: "Fit for Developing Software".
I've been thinking about your question, why not run acceptance tests with the continuous build as long as they run fast. I think the reason to run them separately from the continuous build is their scope rather than how fast they run. Some further comments are here.