Most of what we do involves some iterative testing as we code. Write a few lines, then build to make sure they compile. Once it compiles, run it and check the behavior against your understanding of the desired functionality. Little testing steps are built into everything we do. Most of them have become second nature; we don't think of them as testing at all, but simply as part of coding.
A year or two ago, Test Driven Development (TDD) began gaining favor as hordes of teams scrambled to get hip with agile techniques and practices. TDD is an approach where you write your tests first, then code until your tests pass. It is generally accepted as a solid way to produce code with far fewer bugs than traditional approaches to building software. I do believe it is always better to know the tests you are going to perform, and their expected outcomes, before you write the first line of code. This philosophy goes back to my first job out of college in management consulting, when I worked for one of the big 5 (there were 5 at the time, if that dates me). A common first assignment was to write pages and pages of test scripts. Reams of paper fell victim to the relatively useless test scripts cranked out by the scores of new employees who went through the 6-week training course.
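The tests-first, code-until-green rhythm can be sketched in a few lines. This is a minimal illustration using Python's built-in `unittest` as a stand-in for NUnit; the `discount` function and its 10%-off-orders-of-$100 rule are invented for the example, not taken from any real system:

```python
import unittest

# In TDD the test class below is written FIRST and run to watch it fail;
# only then is this function implemented, one passing test at a time.
# Hypothetical business rule: orders of $100 or more get 10% off.
def discount(subtotal: float) -> float:
    return subtotal * 0.9 if subtotal >= 100 else subtotal

class DiscountTests(unittest.TestCase):
    def test_small_order_gets_no_discount(self):
        # Expected outcome decided before any code existed.
        self.assertEqual(discount(50.0), 50.0)

    def test_large_order_gets_ten_percent_off(self):
        self.assertAlmostEqual(discount(200.0), 180.0)
```

Run with `python -m unittest` against the file; in strict TDD you would see both tests fail (red), implement `discount`, and re-run until they pass (green) before refactoring.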
Now I want to clarify that I think test scripts, and TDD in general, are both good things. However, I think the most important aspect of crafting your tests is the person doing it. To take a newly hired employee and ask them to write test scripts is, in my opinion, ludicrous. In order to write test scripts that provide valuable, representative coverage, you need to know the business. While this may be possible for a simple NorthWind order placement function, most true business problems are not nearly that trivial. I work in the oil and gas industry, and in order to predict what the software should do in a given situation, it is essential to understand the underlying functional equations. Not only that, you must also understand which types of scenarios are common, and be able to distinguish a rare occurrence from something that should or could never happen. Without this understanding, you could spend a large portion of your time covering test cases that don't make any sense.
I would also like to take my analysis of what makes a good tester a bit further and say that the less they know about how the software works, the better. Every operation the person creating the test scripts understands in advance is one that may consciously or subconsciously influence their thinking while creating the test. The goal is not good code coverage with your tests, but good business coverage.
We all want our code to be used and appreciated by the end user. So, to make the best use of resources in your quality assurance cycle, I recommend that all projects keep these points in mind:
- Always know, in specific detail, the expected result of your software
- Be sure to know how to differentiate a common case from a valid case from a ridiculous case
- Employ a tester with a functional background, not a technical one
- Try to have 3 different people responsible for design, development, and testing
- Create a standard set of regression tests… and use them
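To make the second and last points above concrete, here is one sketch of a business-coverage regression suite, again in Python's `unittest` as a stand-in for NUnit. The `estimated_production` function, its formula, and the pressure figures are all invented placeholders for illustration, not real petroleum engineering:

```python
import unittest

# Hypothetical domain function: estimated daily production (barrels)
# from reservoir pressure in psi. The linear formula is a placeholder.
def estimated_production(pressure_psi: float) -> float:
    if pressure_psi < 0:
        # A negative absolute pressure can never happen in the field.
        raise ValueError("absolute pressure cannot be negative")
    return 0.05 * pressure_psi

class BusinessCoverageTests(unittest.TestCase):
    def test_common_case(self):
        # Common: the range a functional expert sees every day.
        self.assertAlmostEqual(estimated_production(2000.0), 100.0)

    def test_valid_but_rare_case(self):
        # Valid but rare: possible in the field; worth one test, not fifty.
        self.assertGreater(estimated_production(15000.0), 0.0)

    def test_ridiculous_case_is_rejected(self):
        # Ridiculous: reject impossible input outright rather than spending
        # test effort modeling its "behavior".
        with self.assertRaises(ValueError):
            estimated_production(-500.0)
```

Only someone with the functional background knows that 2,000 psi is routine, 15,000 psi is rare but real, and a negative pressure is nonsense; that knowledge, not the code, is what decides which of these tests exist.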
I hope you find these simple guidelines useful. I have seen people fall into situations where they spend a great deal of time maintaining NUnit test cases. I am a proponent of this, even though it requires commitment and adds some overhead to your project. Where you get into trouble is when you add 75% more time to your project building and maintaining test cases that are not representative of the business. Before you invest time in covering your code with tests, make sure you are covering it with the right stuff.