The answer, as far as I'm concerned, is 'BOTH'. Read these entertaining blog posts to see why: Roy Osherove's JAOO conference writeup (his take on Martin Fowler's accent cracked me up), Martin Jul's take on the pull-no-punches discussions on TDD between Roy O. and Jim Coplien, and also Martin Jul's other blog post on why acceptance tests are important.
As I said before, holistic testing is the way to go.
6 comments:
We seem to be looking for the right checklist that will give us quality. The checklist at hand has TDD and acceptance testing on it and we're asked to check zero, one, or two boxes.
I think it's the wrong question.
The objective is to deliver what the customer wanted while optimizing quality; the broader goal is to create value. Quality suffers if you misunderstand what the customer wanted, or if the code has internal interactions that are difficult to foresee, or if local code violates array boundaries or uses untethered pointers. It also suffers if the interface makes it possible for the user to commit errors, or if it takes too many keystrokes to enter a small amount of common information. Testing covers only a fraction of these and does so in the least efficient way known to our industry.
Focus on quality assurance instead of testing. That means using Use Cases (which are an Agile way of engaging customers—in the sense of your full constituency, not just the sacrificial victim the company sends as the On-Site Customer) instead of XP-style user stories (so you understand feature interactions up-front), doing pair programming or desk-check code inspections, doing design by contract, driving the process with usability experts and doing good usability testing, and a host of other things.
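To make just one of those concrete: design by contract states preconditions and postconditions in the code itself, so a violation surfaces at the point of the error rather than in some downstream test. A minimal Python sketch (the Account class and its contracts are hypothetical; real contract support, as in Eiffel, is far richer):

    class Account:
        def __init__(self, balance=0):
            # Precondition: an account cannot open in the red.
            assert balance >= 0, "precondition: balance must be non-negative"
            self.balance = balance

        def withdraw(self, amount):
            # Preconditions: the caller promises a sane request.
            assert amount > 0, "precondition: amount must be positive"
            assert amount <= self.balance, "precondition: cannot overdraw"
            old_balance = self.balance
            self.balance -= amount
            # Postcondition: the method promises exactly this state change.
            assert self.balance == old_balance - amount, "postcondition violated"
            return self.balance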
On one hand we should be encouraged to use every weapon in our arsenal; on the other hand, we'll never get done if we try to do everything. It then becomes a matter of cost-effectiveness. Integration and system testing have long been shown to be the least efficient ways of finding bugs. Recent studies of TDD (Siniaalto and Abrahamsson) show that it may have no benefit over traditional test-last development, that in some cases it has degraded the code, and that it has other "alarming" (their word) effects. The one that worries me the most is that it degrades the architecture. And acceptance testing is orders of magnitude less efficient than good old-fashioned code inspection: it is extremely expensive, and it comes too late to keep the design clean.
Jef Raskin says that the "interface is the program." If we're going to do testing, that's where we should focus it. The interface is our link to the end user. Agile is about engaging that user; TDD, about engaging the code. TDD is a methodology. TDD is about tools and processes over people and interactions. Whoops. Agile pundits go so far as to marginalize that aspect of software development (read the very first and foremost point of the articulated Philosophy).
Sorting these things out requires a system view. It used to be called "systems engineering," but the New Age Agile movements consistently move away from such practices. It shows in the quality of our interfaces, in the incredible code bloat we have today relative to two decades ago, and in the fact that we keep getting stuck in practices that look good under a simple analysis but fall apart when you look at them more closely. Pair programming has been getting mixed results (see the summary by Abrahamsson); TDD is under attack (North; Siniaalto and Abrahamsson; Bjørnvig, Coplien and Harrison); the on-site customer has also been at least partially discredited (Martin and Noble).
The answer, I think, is to focus on Quality and to find the techniques that work best and are the most cost-effective for your project. Just answering "both" to the "TDD-or-acceptance" question is not only at the wrong scope, it is missing the whole point.
It's about thinking, not about checklists. I think that one root of the problem lies in modern education, which has increased its focus on technique and reduced its focus on thinking. I also think that, because of a hot job market, more and more people are getting into the industry before having had enough exposure to academia to properly develop their thinking tools. Fewer and fewer job applicants have Master's degrees; fewer and fewer students in Denmark wait to finish even their undergraduate degrees before succumbing to the lure of money and their chance to make a mark on the world. And fewer and fewer programmers understand software history. Not knowing design by contract or Cleanroom, they never think even to ask whether TDD can be as effective as those techniques at delivering exactly the same improvements to the code.
I'm terribly sorry that this post did not give you a checklist.
Jim -- thanks for taking the time to comment on my post. I stand by my assertion that any project needs automated tests at the unit level (be it TDD or not) and at the acceptance/functional level. Not sure how the people you quote got their statistics, but I worked for 5 years in QA and the worst bugs I've seen were discovered during the integration/system testing phase. I've also seen many problems that could have been prevented by automated unit tests. In general, I don't know how you can do proper regression testing without automated tests at all levels.
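To be concrete, here's the kind of unit-level test I mean -- a minimal PyUnit (unittest) sketch, where parse_cidr is a made-up stand-in for real application code:

    import unittest

    def parse_cidr(cidr):
        # Hypothetical application code: split '10.0.0.0/8' into ('10.0.0.0', 8).
        ip, _, prefix = cidr.partition('/')
        if not ip or not prefix.isdigit() or int(prefix) > 32:
            raise ValueError("invalid CIDR block: %r" % cidr)
        return ip, int(prefix)

    class ParseCidrTest(unittest.TestCase):
        def test_valid_block(self):
            self.assertEqual(parse_cidr('10.0.0.0/8'), ('10.0.0.0', 8))

        def test_invalid_block_raises(self):
            # A regression here gets caught immediately, on every run.
            self.assertRaises(ValueError, parse_cidr, '10.0.0.0/99')

    if __name__ == '__main__':
        unittest.main()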
I agree with your assertion that people need to spend more time developing their thinking, but I don't think academia is necessarily the place for it. Especially when it comes to software engineering, academia still insists on waterfall and BDUF (Big Design Up Front). Agile methodologies are rarely, if ever, mentioned. Professors who teach software engineering classes almost never take the time to do real programming themselves, on a real project that has uses in the real world.
Grig
Hi again, Grig,
I've expanded on my response at my own 'blog on Artima. There, I've fleshed out some of the references that back up my claims.
I'm not saying we should discard automated check-in tests. But I might suggest that those tests be system tests written by testing professionals rather than by the coders. And I might suggest that the most important problems evade even testing in the sense that you use the term here. How can a program evaluate the efficiency or humaneness of an interface, or measure how often it leads users to erroneous input? How can tests written by developers, during implementation, close the loop between what the program does and what the client wants? These are the serious and interesting problems. I am rather indifferent about what you want to do for the simple problems, because simple problems benefit from just about any attention you give them.
As for your indictment of the dysfunction in academia: 150% agreement. Thanks for pointing out those problems here. That's the problem I'm discussing in my own 'blog. I hope you come and visit and contribute.
Grig,
So what do you do when you're a QA person and there are no unit tests and no history of test automation at all? That's my situation... I'm fairly familiar with test automation -- PyUnit, Ruby-Watir, JUnit, a little Selenium. My solution, since the culture here isn't to do automated testing, is to try to get the developers to at least create a test API for me, and then build a suite of smoke tests that run at the business logic layer. By making them available to the developers to run as they build, I'm hoping it will inspire them to see the value of automated testing and "get religion". Since it's a pretty big project and I'm the only QA person, my time is pretty crunched, so it's not like I can devote myself full-time to automated testing... but this seemed like a decent incremental first step. Any thoughts?
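Here's roughly what I have in mind for one of those smoke tests -- a sketch with an in-memory stand-in for the test API (the real one would be whatever module the developers expose over the business logic layer):

    import unittest

    # In-memory stand-in for the hypothetical test API; in real use this
    # would be an import of whatever the developers expose.
    _ORDERS = {}

    def create_order(customer, items):
        order_id = len(_ORDERS) + 1
        _ORDERS[order_id] = {'customer': customer, 'items': items, 'status': 'open'}
        return order_id

    def get_order(order_id):
        return _ORDERS[order_id]

    def cancel_order(order_id):
        _ORDERS[order_id]['status'] = 'cancelled'

    class OrderSmokeTest(unittest.TestCase):
        # Smoke tests: fast, shallow checks that the core paths still work.
        def test_create_and_fetch_order(self):
            order_id = create_order('smoke', ['widget'])
            self.assertEqual(get_order(order_id)['customer'], 'smoke')

        def test_cancel_order(self):
            order_id = create_order('smoke', ['widget'])
            cancel_order(order_id)
            self.assertEqual(get_order(order_id)['status'], 'cancelled')

    if __name__ == '__main__':
        unittest.main()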
Jim K. -- I think that's a great first step, especially if you include the smoke test suite in a continuous integration system, so that people can get very rapid feedback on how the build is doing in terms of the smoke test.
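The CI hook can be as small as a script that runs the smoke suite and fails the build on any error -- a sketch, assuming the smoke tests live under tests/smoke:

    import sys
    import unittest

    # Discover every test under tests/smoke (the path is an assumption)
    # and exit nonzero so the CI server marks the build as failed.
    suite = unittest.defaultTestLoader.discover('tests/smoke')
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    sys.exit(0 if result.wasSuccessful() else 1)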
I've found it's harder to convince developers to start writing unit tests -- they may see it as a waste of their time. But if they see that the smoke tests catch regression bugs, maybe they'll start changing their minds....
Good luck, it's hard to introduce a culture of automated testing into an organization that doesn't see its value.
Grig
Jim K -- What do you mean by "test API"?
It sounds like you'd end up testing that API rather than the software behind it, which is what you're actually after... or am I missing something?