There's an ongoing discussion on the FitNesse mailing list about whether acceptance tests should talk to a "real" database or should instead use mock objects to test the business rules. To me, acceptance tests, especially the ones written in Fit/FitNesse, are customer-facing (a term coined, I believe, by Brian Marick), and hence they need to give customers a warm and fuzzy feeling that they're actually proving something about the system as a whole. The closer the environment is to the customer's production environment, the better in this case. Here's my initial message to the list:
I would say that since you're doing *acceptance* testing, it doesn't make too much sense to mock the database. Mocking makes more sense in a unit testing situation. For acceptance testing, the goal as I see it is to mimic the customer environment as much as possible, and that involves setting up a test database that is as similar as possible to the production database. Of course, if you have a huge production database, it is not practical to have a same-sized test database, so what I'd do is extract datasets with certain characteristics from the production db and populate the test db with them. I'd choose corner cases, etc.
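To make the "extract corner cases from production" idea concrete, here is a minimal sketch using SQLite. The `orders` table, its columns, and the corner-case criteria (zero, negative, and very large amounts) are all invented for illustration; in practice you'd run a similar query against your real production schema.

```python
import sqlite3

# Stand-in for the production database (schema and data are hypothetical).
prod = sqlite3.connect(":memory:")
prod.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, status TEXT)")
prod.executemany(
    "INSERT INTO orders (amount, status) VALUES (?, ?)",
    [(0.0, "cancelled"), (19.99, "open"), (1e6, "open"),
     (-5.0, "refund"), (42.0, "closed")])

# The much smaller test database, with the same schema.
test_db = sqlite3.connect(":memory:")
test_db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, status TEXT)")

# Pull only the corner cases: zero, negative, and very large amounts.
corner_rows = prod.execute(
    "SELECT id, amount, status FROM orders "
    "WHERE amount <= 0 OR amount >= 100000").fetchall()
test_db.executemany(
    "INSERT INTO orders (id, amount, status) VALUES (?, ?, ?)", corner_rows)

count = test_db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 3 corner-case rows extracted out of 5 production rows
```

The acceptance tests then run against `test_db`, which behaves like production for the interesting inputs while staying small enough to rebuild for every test run.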
A couple of other people replied to the initial post and said they think that "pure business rules" should also be exercised by mocking the backend and testing the application against data that can come from any source. While I think that this is a good idea, I see its benefit mainly in an overall improvement of the code. If you think about testing the business rules independently of the actual backend, you will have to refactor your code so that you encapsulate the backend-specific functionality as tightly as possible. This results in better code that is more robust and more testable. However, this is a developer-facing issue, not necessarily a customer-facing one. The customers don't necessarily care, as far as their acceptance tests are concerned, how you organize your code. They care that the application does what they want it to do. In this case, "end-to-end" testing that involves a real database is, I think, necessary.
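As a sketch of what "encapsulating the backend as tightly as possible" can look like: the business rule depends only on a small interface, so it can be exercised with an in-memory fake or wired to the real database. The names here (`OrderRepository`, `discount_for`, the 10% rule) are invented for this example, not from the discussion.

```python
from typing import Protocol

class OrderRepository(Protocol):
    """The only thing the business rule knows about the backend."""
    def total_spent(self, customer_id: int) -> float: ...

def discount_for(repo: OrderRepository, customer_id: int) -> float:
    # Pure business rule: 10% discount for customers who spent over 1000.
    return 0.10 if repo.total_spent(customer_id) > 1000 else 0.0

class FakeRepository:
    """In-memory fake for business-rule tests; a production implementation
    would wrap the real database behind the same interface."""
    def __init__(self, totals: dict[int, float]):
        self._totals = totals

    def total_spent(self, customer_id: int) -> float:
        return self._totals.get(customer_id, 0.0)

repo = FakeRepository({1: 1500.0, 2: 200.0})
print(discount_for(repo, 1))  # 0.1
print(discount_for(repo, 2))  # 0.0
```

The refactoring pays off both ways: the rule can be unit-tested in milliseconds with the fake, and the end-to-end acceptance tests simply plug in the database-backed implementation instead.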
In this context, Janniche Haugen pointed me to a great article written by David Chelimsky: Fostering Credibility In Customer Tests. David talks about the confusion experienced by customers when they don't know exactly what the tests do, especially when there's a non-obvious configuration option which can switch from a "real database" test to a "fake in-memory database" test. Maybe what's needed in this case is to label the tests in big bold letters on the FitNesse wiki pages: "END TO END TESTS AGAINST A LIVE DATABASE" vs. "PURE BUSINESS LOGIC TESTS AGAINST MOCK OBJECTS". At least everybody would be on the same (wiki) page.
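One way to avoid the confusion David describes is to make the switch explicit rather than buried in configuration, so the test run announces which mode it is in. Everything here is a hypothetical sketch: the `TEST_MODE` variable, the `select_backend` factory, and the file name are all assumptions for illustration.

```python
import os
import sqlite3

def select_backend(mode: str):
    """Return a banner and a connection factory for the given mode.
    The mode names and database targets are illustrative assumptions."""
    if mode == "live":
        return ("END TO END TESTS AGAINST A LIVE DATABASE",
                lambda: sqlite3.connect("production_copy.db"))
    if mode == "mock":
        return ("PURE BUSINESS LOGIC TESTS AGAINST MOCK OBJECTS",
                lambda: sqlite3.connect(":memory:"))
    raise ValueError(f"unknown test mode: {mode!r}")

# Default to the safe in-memory mode; print the banner at the top of the
# run so nobody has to guess which kind of test they are looking at.
mode = os.environ.get("TEST_MODE", "mock")
banner, connect = select_backend(mode)
print(banner)
```

The same banner string could be rendered on the FitNesse wiki page itself, so the label in the test output and the label on the page can never drift apart.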
My conclusion is that for optimal results, you need to do both kinds of testing: at the pure business rule level and at the end-to-end level. The first type of tests will improve your code, while the second one will keep your customers happy. For customer-specific Fit/FitNesse acceptance tests I would concentrate on the end-to-end tests.