Should acceptance tests talk to the database?

There's an ongoing discussion on the FitNesse mailing list about whether acceptance tests should talk to a "real" database or should instead use mock objects to test the business rules. To me, acceptance tests, especially the ones written in Fit/FitNesse, are customer-facing (a term coined, I believe, by Brian Marick), and hence they need to give customers a warm and fuzzy feeling that they're actually proving something about the system as a whole. The closer the environment is to the customer's production environment, the better in this case. Here's my initial message to the list:

I would say that since you're doing *acceptance* testing, it doesn't make too much sense to mock the database. Mocking makes more sense in a unit testing situation. But for acceptance testing, the way I see it is to mimic the customer environment as much as possible, and that involves setting up a test database which is as similar as possible to the production database. Of course, if you have a huge production database, it is not optimal to have the same sized test database, so what I'd do is extract datasets with certain characteristics from the production db and populate the test db with them. I'd choose corner cases, etc.
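To make the dataset-extraction idea concrete, here is a minimal Python sketch using sqlite3; the `orders` schema, the corner-case criteria, and the two in-memory databases standing in for a production snapshot and a test database are all assumptions for illustration, not part of the original discussion.

```python
import sqlite3

# Stand-ins for a production snapshot and a test database (hypothetical schema).
prod = sqlite3.connect(":memory:")
test = sqlite3.connect(":memory:")

prod.execute("CREATE TABLE orders (id INTEGER, amount REAL, status TEXT)")
prod.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 0.0, "new"),         # corner case: zero amount
                  (2, 149.99, "shipped"),  # ordinary row
                  (3, -5.0, "refunded")])  # corner case: negative amount

test.execute("CREATE TABLE orders (id INTEGER, amount REAL, status TEXT)")

# Copy only the rows with interesting characteristics into the test database.
corner_cases = prod.execute(
    "SELECT id, amount, status FROM orders WHERE amount <= 0"
).fetchall()
test.executemany("INSERT INTO orders VALUES (?, ?, ?)", corner_cases)

print(test.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # -> 2
```

The same idea scales to a real production database: the WHERE clauses encode the corner cases you care about, and the test database stays small enough to rebuild before every run.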

A couple of other people replied to the initial post and said they think that "pure business rules" should also be exercised by mocking the backend and testing the application against data that can come from any source. While I think this is a good idea, I see its benefit mainly as an overall improvement of the code. If you want to test the business rules independently of the actual backend, you have to refactor your code so that you encapsulate the backend-specific functionality as tightly as possible. This results in better code which is more robust and more testable. However, this is a developer-facing issue, not necessarily a customer-facing one. The customers don't necessarily care, as far as their acceptance tests are concerned, how you organize your code. They care that the application does what they want it to do. In this case, "end-to-end" testing that involves a real database is, I think, necessary.

In this context, Janniche Haugen pointed me to a great article written by David Chelimsky: Fostering Credibility In Customer Tests. David talks about the confusion experienced by customers when they don't know exactly what the tests do, especially when there's a non-obvious configuration option which can switch from a "real database" test to a "fake in-memory database" test. Maybe what's needed in this case is to label the tests in big bold letters on the FitNesse wiki pages: "END TO END TESTS AGAINST A LIVE DATABASE" vs. "PURE BUSINESS LOGIC TESTS AGAINST MOCK OBJECTS". At least everybody would be on the same (wiki) page.

My conclusion is that for optimal results, you need to do both kinds of testing: at the pure business rule level and at the end-to-end level. The first type of tests will improve your code, while the second one will keep your customers happy. For customer-specific Fit/FitNesse acceptance tests I would concentrate on the end-to-end tests.


Jerrad Anderson said…
Acceptance testing is the acceptance of a system. System testing shouldn't involve mocking objects, but rather a system that closely resembles the delivered application.
Bruce Leggett said…
Mock objects shouldn't be used in acceptance tests or unit tests, period. Mocking has many issues when it comes to testing object interactions. Read my blog post "A Mockery of Mock Objects".
Grig Gheorghiu said…

I agree with you that acceptance tests should be as close to 'production' as possible, although in my experience it is sometimes worth creating a repeatable environment, where you can verify things that you wouldn't be able to verify in a random, production-like environment. And to do this, you need to mock certain things.

As for unit tests, I think mocks are useful especially for simulating error conditions and exceptions that would be otherwise close to impossible to simulate.
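A minimal Python sketch of that kind of error simulation, using the standard library's unittest.mock; the `get_user` method, the `fetch_username` function, and the `DatabaseError` exception are hypothetical names chosen for illustration.

```python
from unittest.mock import Mock

class DatabaseError(Exception):
    """Hypothetical error raised by the database layer."""

def fetch_username(db, user_id):
    # The code under test: fall back gracefully if the database fails.
    try:
        return db.get_user(user_id)["name"]
    except DatabaseError:
        return "<unavailable>"

# Force the failure path -- hard to trigger on demand against a real database.
db = Mock()
db.get_user.side_effect = DatabaseError("connection lost")
print(fetch_username(db, 42))  # -> <unavailable>
```

Setting `side_effect` makes the mocked call raise instead of return, which is exactly the condition that is close to impossible to reproduce reliably with a live backend.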

Thanks for the link to your article, I already had it bookmarked :-)
Bruce Leggett said…

For any situation where you feel that you "need" a mock object, I can show you how to reproduce that situation without one. Using mock objects incrementally breaks the integrity of your tests. Testing a class in complete isolation is nonsensical when that class has any collaborations. Mocking the collaborations leads to many ramifications, especially when those collaborations get modified or refactored. I stand firm against the use of mock objects based on years of experience with them and their misuse.

Kind Regards,
Bruce Leggett
Grig Gheorghiu said…

I don't agree with you. The example you gave on your blog (object B calling a method of object A) is a bit simplistic IMO.

Let's take another example: suppose your object B uses another object A, where A encapsulates database access. Let's say A implements a select method, which actually does a select into a database and returns a list of values.

Let's also assume B calls A's select method at some point, and that B does something with the list it returns. When you test B, you want to test its internal logic, i.e. that it really does that *something* with the list. That's why it's sometimes helpful to mock A and return a canned list -- because to test B, you don't necessarily care *how* you obtained that list.

You will say: what if we change/refactor A's select method? Well, B's test will still verify B's internal behavior, using the canned value. A's unit tests should catch any changes/refactorings specific to A.

B's unit tests will still pass, it's true. I don't necessarily see this as a problem, since hopefully you have acceptance tests that will catch the change in the interaction between A and B.
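The A/B scenario above might look like this in Python with unittest.mock; B's `shout_all` logic is a made-up stand-in for the *something* B does with the list.

```python
from unittest.mock import Mock

class B:
    def __init__(self, a):
        self.a = a

    def shout_all(self):
        # B's internal logic: do *something* with the list A's select() returns.
        return [value.upper() for value in self.a.select()]

# Mock A and return a canned list: B's test only cares what B does
# with the values, not how they were fetched from the database.
a = Mock()
a.select.return_value = ["ada", "grace"]
assert B(a).shout_all() == ["ADA", "GRACE"]
```

The test pins down B's transformation of the list while leaving the database entirely out of the picture.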
Bruce Leggett said…

I still make a mockery of mock objects. I don't buy the dogma that mock object framework creators would have you believe. I think outside the box. Mock objects are prone to many avenues of failure, and can be easily misused.

The dilemma in your thinking is that you want the acceptance tests to detect defects in object interactions, which is IMO too late in the game. More desirably, you would want to catch errors in object interactions/integrations during the "Daily Build and Smoke Test", or Continuous Integration.

If you depend on acceptance tests to catch those errors, then an entire iteration could go by without catching interaction errors. That could potentially be thirty business days! By that point, you have a problem, because now you have thirty days' worth of code to sift through instead of one day's worth. Furthermore, because the errors are no longer identifiable while running the unit tests, you have to debug the code manually. Or, if you are lucky, the runtime will catch the error for you when you run your acceptance test scripts (again, late bug detection).

Testing interaction errors early is one point. But you are right: "refactoring, dependencies, and interactions" is another one of my points. Mock objects become problematic when you are sprinkling them all over the place to fake object interactions. In Agile development, "refactor mercilessly" is one of the rules, right? So what happens when you refactor to a simpler, more elegant solution, but your unit tests fail to recognize the changes in object interactions?

With that said, let's look back at your example. Object B depends on object A's select method, which returns a list. Let's assume the list returned is a list of strings. You mock the select method to return a list of strings, so whenever object B calls A's select method in the test, it always gets back the mocked list of strings. Now, hypothetically, the list returned by the real object A holds generic objects. Later, you decide that only integers will be returned, so you modify A to return a list of integers. Object B, however, is still expecting strings, and it does that *something* with strings, not integers. Yet object B's unit test has not changed, BUT it still passes! It passes because the mock is still set up to return a list of strings. I believe you can see the problem at this point.
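Bruce's stale-mock scenario can be sketched in Python (with a hypothetical `shout_all` method standing in for B's *something*): the mocked unit test keeps passing even though the real collaboration is now broken.

```python
from unittest.mock import Mock

class A:
    def select(self):
        # Refactored: now returns integers instead of strings.
        return [1, 2]

class B:
    def __init__(self, a):
        self.a = a

    def shout_all(self):
        return [value.upper() for value in self.a.select()]  # still assumes strings

# B's unit test has not changed and still passes, because the mock
# still hands back the old list of strings...
mock_a = Mock()
mock_a.select.return_value = ["ada", "grace"]
assert B(mock_a).shout_all() == ["ADA", "GRACE"]

# ...but wiring B to the real A now blows up at runtime.
try:
    B(A()).shout_all()
except AttributeError as exc:
    print("caught:", exc)  # integers have no upper() method
```

The green unit test and the runtime failure coexist, which is exactly the half-truth Bruce is objecting to.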

To elaborate a little more on the dilemma stated above: if you had not used a mock object, any problem with an interaction would have been immediately detected when you ran all your unit tests. That is the way we do things here; everyone here understands why, and enjoys the benefits of automating the testing of *true* object interactions and regression testing. Using mock objects is like telling someone something that is only half true.

I really enjoy our discussion. Also, I liked your article "Thoughts on giving a successful talk". I am going to be giving a technical speech on Design Patterns and the Cairngorm micro-architecture next week, so that was right up my alley.

Grig Gheorghiu said…
Bruce -- thanks for your time in this discussion, which I also find interesting and enjoyable.

I guess it comes back to how you define a unit test. The way you do it, you're not really testing object B in isolation, but you're also testing the interaction between A and B. I call that integration testing.

And I do emphasize the fact that unit tests are not enough; they can't be the only weapon in your testing arsenal. You can have very nasty surprises if you rely only on unit tests and don't do any acceptance or system integration/black box kind of testing.

Example: I'm coordinating the Pybots project, a continuous integration buildbot farm that tests various Python projects with Python binaries/libs built from the very latest Python SVN trunk. After a certain Python checkin, no packages could be installed any more. However, ALL of the unit tests for Python itself were passing with flying colors! Conclusion: always test your application from the outside in (black box), as well as from the inside out (white box).

Glad you found my post on how to give a technical talk useful.

Bruce Leggett said…

Yeah, we do not test in "isolation". The entry point of our tests starts from the class being tested and then follows all the internal paths, however many there may be for that test.

Black box testing is always part of our development. You are right; sometimes white box testing does not catch everything (occasionally contingencies are overlooked), so black box tests are used to finalize the quality of the release.

I'm going to finish reading your article about speaking. Good luck with your Python project.
Stuart said…
I read your post and the comments with interest. As a software test practitioner I find some of the terms used within Agile software development amusing. People talk of acceptance testing when they mean integration in the large. But back to your post.

If you just mock up a database then you don't understand your data. You don't need volume, but you do need to identify all the record types you will have to deal with.

BTW in this context what is a "corner case"? Do you mean edge case or is this another Agile test term that's emerged?
Claudio said…
Hello :) How do you work with database data for each test? When each Selenium Java test finishes, I try to restore the initial state of the database test data. What do you think? I have some problems doing this in my architecture, but I don't know if I'm on the right track.

Grig Gheorghiu said…
Claudio -- check out the comments in this blog post:

Claudio said…
Thanks Grig, I already started to try some of these solutions.
