How misunderstanding negative testing may lead to useless test design (and an untruthful traceability matrix!)
Suppose you have a system with three statuses: A, B and C.
Suppose you have a requirement (let’s say Req. 1) which states that, if the system status goes from A to B, then action X happens.
Suppose you are a tester and you want to verify that Req. 1 has been properly implemented in the system under test.
You might write a test case to verify that, when the system status goes from A to B, action X actually happens. So far so good.
You might also write a negative test case to verify that, when the system status goes, for instance, from C to B, action X does not happen. Great!
What you should never write is a test case (negative only in appearance) to verify that, when the system status goes from C to B, action Y happens.
This would not be a negative test: it would be just useless testing, because action Y has nothing to do with action X, let alone with Req. 1.
What’s more, this astonishing example should make many test managers shiver, as it reveals that perhaps the traceability matrix is not the best tool to assess how well your requirements are covered or how properly your system has been implemented.
What’s the point of having an enviable and shining traceability matrix, showing that 100% of your requirements are perfectly covered by… an endless series of totally useless test cases?!?
I know, you may believe I’m exaggerating. In fact I am, in a way, but maybe not so much: the astonishing example above, pulled out of real working life, is just a simplification of what really happened in a slightly more complex scenario with a few more system statuses (in addition to A, B and C). Seven useless test cases in a row! Unbelievable, right?
To sum up, I would make a plea to test managers: before showing off a satisfactory traceability matrix (I would even say before anything else), you should ensure you can rely on your testers. Can you?