There's an ongoing discussion on the FitNesse mailing list about whether acceptance tests should talk to a "real" database or should instead use mock objects to test the business rules. To me, acceptance tests, especially the ones written in Fit/FitNesse, are customer-facing (a term coined, I believe, by Brian Marick), and hence they need to give customers a warm and fuzzy feeling that they're actually proving something about the system as a whole. The closer the environment is to the customer's production environment, the better in this case. Here's my initial message to the list:
I would say that since you're doing *acceptance* testing, it doesn't make much sense to mock the database. Mocking makes more sense in a unit testing situation. For acceptance testing, the way I see it, you want to mimic the customer environment as closely as possible, and that involves setting up a test database which is as similar as possible to the production database. Of course, if you have a huge production database, it is not practical to have a test database of the same size, so what I'd do is extract datasets with certain characteristics from the production db and populate the test db with them. I'd choose corner cases, etc.
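To make this concrete, here is a minimal sketch of the kind of extraction script I have in mind. The orders table, the corner-case criteria and the sqlite3 connections are all made up for illustration; in practice the two connections would point at your production and test database servers.

import sqlite3

# Hypothetical connection targets; in practice these would be your
# production and test database servers, not local sqlite files.
prod_conn = sqlite3.connect("production.db")
test_conn = sqlite3.connect("test.db")

# Pull only the rows that exhibit the characteristics we care about
# (the corner cases), instead of copying the whole production table.
corner_cases = prod_conn.execute(
    "SELECT id, amount, status FROM orders "
    "WHERE amount <= 0 OR amount > 100000 OR status = 'cancelled'"
).fetchall()

test_conn.execute(
    "CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL, status TEXT)"
)
test_conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", corner_cases)
test_conn.commit()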
A couple of other people replied to the initial post and said they think that "pure business rules" should also be exercised by mocking the backend and testing the application against data that can come from any source. While I think this is a good idea, I see its benefit mainly in an overall improvement of the code. If you want to test the business rules independently of the actual backend, you have to refactor your code so that the backend-specific functionality is encapsulated as tightly as possible. This results in better code that is more robust and more testable. However, this is a developer-facing concern, not necessarily a customer-facing one. The customers don't necessarily care, as far as their acceptance tests are concerned, how you organize your code. They care that the application does what they want it to do. In this case, "end-to-end" testing that involves a real database is, I think, necessary.
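As a rough sketch of the kind of refactoring I mean (the OrderStore interface and the business rule below are invented for the example), the business logic depends only on a narrow interface, and the real database backend becomes just one implementation of it:

class OrderStore:
    """Narrow interface that hides the backend-specific details."""
    def open_orders(self):
        raise NotImplementedError

class DatabaseOrderStore(OrderStore):
    """Real backend: pulls open orders from the database."""
    def __init__(self, conn):
        self.conn = conn
    def open_orders(self):
        return self.conn.execute(
            "SELECT id, amount FROM orders WHERE status = 'open'").fetchall()

class InMemoryOrderStore(OrderStore):
    """Test double: serves a canned list of orders."""
    def __init__(self, rows):
        self.rows = rows
    def open_orders(self):
        return list(self.rows)

def total_open_amount(store):
    # The "pure business rule": it only depends on the OrderStore interface,
    # so it can be exercised against either backend.
    return sum(amount for _order_id, amount in store.open_orders())

# Business-rule test against the in-memory backend:
assert total_open_amount(InMemoryOrderStore([(1, 10.0), (2, 2.5)])) == 12.5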
In this context, Janniche Haugen pointed me to a great article written by David Chelimsky: Fostering Credibility In Customer Tests. David talks about the confusion experienced by customers when they don't know exactly what the tests do, especially when there's a non-obvious configuration option which can switch from a "real database" test to a "fake in-memory database" test. Maybe what's needed in this case is to label the tests in big bold letters on the FitNesse wiki pages: "END TO END TESTS AGAINST A LIVE DATABASE" vs. "PURE BUSINESS LOGIC TESTS AGAINST MOCK OBJECTS". At least everybody would be on the same (wiki) page.
My conclusion is that for optimal results, you need to do both kinds of testing: at the pure business rule level and at the end-to-end level. The first type of tests will improve your code, while the second one will keep your customers happy. For customer-specific Fit/FitNesse acceptance tests I would concentrate on the end-to-end tests.
8 comments:
Acceptance testing is the acceptance of a system. System testing shouldn't involve mocking objects, but a system that closely resembles the delivered application.
Bruce,
I agree with you that acceptance tests should be as close to 'production' as possible, although in my experience it is sometimes worth creating a repeatable environment, where you can verify things that you wouldn't be able to verify in a random, production-like environment. To do this, you need to mock certain things.
As for unit tests, I think mocks are useful especially for simulating error conditions and exceptions that would otherwise be close to impossible to trigger.
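A minimal sketch of what I mean, using Python's unittest.mock; the payment gateway and the IOError are made up for the example:

from unittest import mock

def charge_safely(gateway, amount):
    # The error-handling path we want to cover.
    try:
        return gateway.charge(amount)
    except IOError:
        return None

# The mock stands in for a payment gateway whose network failure would be
# close to impossible to trigger on demand against the real backend.
gateway = mock.Mock()
gateway.charge.side_effect = IOError("connection reset")

assert charge_safely(gateway, 100) is None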
Thanks for the link to your article, I already had it bookmarked :-)
Bruce,
I don't agree with you. The example you gave on your blog (object B calling a method of object A) is a bit simplistic IMO.
Let's take another example: suppose your object B uses another object A, where A encapsulates database access. Let's say A implements a select method, which actually runs a select against the database and returns a list of values.
Let's also assume B calls A.select at some point. We can also assume that B does something with the list returned by A.select. When you test B, you want to test its internal logic, i.e. that it really does that *something* with the list. That's why it's sometimes helpful to mock A and return a canned list -- because to test B, you don't necessarily care *how* you obtained that list.
You will say: what if we change/refactor A.select? Well, B's test will still verify B's internal behavior, using the canned value. A's unit tests should catch any changes/refactorings specific to A.
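Here is a small sketch of what I'm describing, with hypothetical names for A and B, and Python's unittest.mock standing in for whatever mocking library you use:

import unittest
from unittest import mock

class B:
    """Hypothetical object B; A is whatever encapsulates database access."""
    def __init__(self, a):
        self.a = a
    def positive_values(self):
        # The internal logic we want to verify: what B does with A's list.
        return [v for v in self.a.select() if v > 0]

class TestB(unittest.TestCase):
    def test_positive_values(self):
        a = mock.Mock()
        a.select.return_value = [-1, 0, 3, 7]  # canned list, no real select
        self.assertEqual(B(a).positive_values(), [3, 7])

if __name__ == "__main__":
    unittest.main()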
B's unit tests will still pass, it's true. I don't necessarily see this as a problem, since hopefully you have acceptance tests that will catch the change in the interaction between A and B.
Bruce -- thanks for your time in this discussion, which I also find interesting and enjoyable.
I guess it comes back to how you define a unit test. The way you do it, you're not really testing object B in isolation, but you're also testing the interaction between A and B. I call that integration testing.
And I do emphasize the fact that unit tests are not enough; they can't be the only weapon in your testing arsenal. You can have very nasty surprises if you only rely on unit tests and don't do any acceptance or system integration/black box kind of testing.
Example: I'm coordinating the Pybots project (www.pybots.org), which is a continuous integration buildbot farm testing various Python projects against Python binaries/libs built from the very latest Python SVN trunk. After a certain Python checkin, no packages could be installed any more; however, ALL of the unit tests for Python itself were passing with flying colors! Conclusion: always test your application from the outside in (black box), as well as from the inside out (white box).
Glad you found my post on how to give a technical talk useful.
Grig
I read your post and the comments with interest. As a software test practitioner, I find some of the terms used within Agile software development amusing. People talk of acceptance testing when they mean integration in the large, but back to your post.
If you just mock up a database then you don't understand your data. You don't need volume, but you do need to identify all the record types you will have to deal with.
BTW in this context what is a "corner case"? Do you mean edge case or is this another Agile test term that's emerged?
Hello :) How do you work with database data for each test? When each Selenium Java test finishes, I try to restore the initial state of the database test data. What do you think? I have some problems doing this in my architecture, but I don't know if I'm on the right track.
Thanks
Claudio -- check out the comments in this blog post:
http://ivory.idyll.org/blog/jan-08/postgresql-test-fixtures.html
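The general idea is to rebuild a known fixture before each test and throw away whatever the test changed afterwards. Here is a rough sketch using Python's unittest and an in-memory sqlite3 database, purely to keep it self-contained; against a real server, your setUp/tearDown would reload the fixture into the test database your Selenium tests hit:

import sqlite3
import unittest

FIXTURE_ROWS = [(1, "alice"), (2, "bob")]  # hypothetical seed data

class AccountPageTest(unittest.TestCase):
    def setUp(self):
        # Rebuild the known fixture before every test.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        self.conn.executemany("INSERT INTO users VALUES (?, ?)", FIXTURE_ROWS)
        self.conn.commit()

    def tearDown(self):
        # Discard whatever the test changed, so the next test starts clean.
        self.conn.close()

    def test_fixture_is_loaded(self):
        count = self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(count, len(FIXTURE_ROWS))

if __name__ == "__main__":
    unittest.main()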
Grig
Thanks Grig, I've already started trying some of these solutions.
Best