I've been meaning to blog about this article, to recommend it to everyone starting out with TDD (but since you get a lot more traffic than I do, why not write it here, where people will actually see it?).
You really did a fine job of capturing the common gotchas and giving good guidance on how to approach writing your tests. A lot of other sources tell you "what" and "why", but it's tough to find a good source on "how". This is it. Thanks a lot!
I was expecting something more than a test for a sum. Many people talk about TDD, but very few show actual code for more complicated scenarios, especially ones involving data (databases, XML, files, etc.), or show how to separate the test code from the production code.
Congratulations Roy! It's definitely on my reading list for the coming days.
Roy, you mention in your article that you should "Avoid Dependencies Between Tests". I sometimes call one test from another to ensure that data/state is set up for this test to run. It does require that the other test runs beforehand, so I guess it does create a dependency between the two.
What do you think is a better alternative to this? Something like database setup and teardown scripts for the test suite? Or using the setup/teardown sections of the test to set up this data?
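The setup/teardown route the commenter mentions can be sketched like this. A minimal example (in Python's unittest for brevity; the original thread is .NET, so treat the framework choice and all names here as illustrative assumptions): each test rebuilds the state it needs in `setUp`, so no test has to call, or run after, another test.

```python
import unittest

class InventoryTests(unittest.TestCase):
    """Hypothetical example: each test gets fresh state from setUp
    instead of depending on another test having run first."""

    def setUp(self):
        # Build the state every test needs, from scratch, before each test.
        self.store = {"widgets": 5}

    def tearDown(self):
        # Discard state so nothing leaks from one test into the next.
        self.store = None

    def test_remove_widget_decrements_count(self):
        del_count = 1
        self.store["widgets"] -= del_count
        self.assertEqual(self.store["widgets"], 4)

    def test_add_widget_increments_count(self):
        # Runs against a fresh store, regardless of test execution order.
        self.store["widgets"] += 1
        self.assertEqual(self.store["widgets"], 6)
```

The same shape applies to a database-backed suite: the fixture creates (and the teardown drops) the rows each test needs, rather than one test piggybacking on another's side effects.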
Great thanks for the tips Roy
Great article Roy (as usual) :)
One thing I was thinking about: multiple asserts. I mostly agree with the single-assert-per-test rule, but sometimes I feel we only avoid multiple asserts because the framework simply doesn't support them well, not for any other "best practice" reason.
Why not support code like:
AssertGroup agroup = new AssertGroup();
Could this reduce some test code? Probably. Is it a good idea? I don't know yet.
Another possible conclusion is that, currently, the feedback we get from tests is very limited (to a single failed assert).
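The `AssertGroup` the commenter imagines could plausibly work as a "soft assert" that collects every failed check and reports them together, instead of stopping at the first failure. A rough sketch (in Python; the class name and methods are made up here to mirror the commenter's hypothetical snippet, not a real framework API):

```python
class AssertGroup:
    """Hypothetical soft-assert helper: records every failed check
    and reports them all at once, instead of stopping at the first."""

    def __init__(self):
        self.failures = []

    def check(self, condition, message):
        # Record a failure instead of raising immediately,
        # so later checks in the same test still run.
        if not condition:
            self.failures.append(message)

    def verify(self):
        # Raise once, at the end, listing every collected failure.
        if self.failures:
            raise AssertionError("; ".join(self.failures))

# Usage: if both checks failed, both messages would appear
# in the single AssertionError raised by verify().
group = AssertGroup()
group.check(1 + 1 == 2, "sum is wrong")
group.check("abc".upper() == "ABC", "upper is wrong")
group.verify()
```

This addresses the limited-feedback point directly: a test using such a group reports all broken expectations in one run, rather than revealing them one failed assert at a time.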