Tuesday 18 February 2014

Maturity of testing process and all that jazz

Today I spent some time discussing something the other person called a "Holistic Testing Maturity Model". Now, I probably didn't understand everything he was saying properly, but it got me thinking, and I want to share. I'm not claiming to present the other person's position fairly and accurately; it was just a starting point for my own thoughts.

What I got from the conversation is the idea of having a separate testing process maturity model that only covers finding and reporting bugs, but not dealing with them. So you can be very mature according to this model if the testing team on your project is all shiny - but at the same time you may have an awful product, because you don't actually do much with the information your shiny testers provide. You will still be able to claim you have a mature testing process. *facepalm*

Now, maybe I spent too much time as a test lead, business analyst and product owner (yep, I wasn't always hands-on testing), but that just doesn't make any fracking sense to me. Excuse my French. Of course it is nice to have a good process for finding and reporting bugs, but what goal do we achieve with that? Good testing? But (a) that's not a goal (it's a means), and (b) it's not good testing. At least not what I call good testing.

Thing is, testing doesn't exist in a vacuum. It exists to add value to the overall product: to provide information that helps make the product (and the process of creating that product) better. If that information is not being used - why should it even exist? If you write perfect bug reports, but the bugs stay forever unfixed - how can that be called a mature testing process? Testing isn't about finding bugs, it's about finding useful information (bug reports are part of that information, surely), so if we cross out the word "useful", that's a crappy testing process from my point of view.

To be sure, I am not saying that product quality is solely the responsibility of the testing team. Neither am I saying that having a nice and clear process for getting to the point where information is provided (but not yet used) is a bad thing. What I'm saying is: a process should be there for a reason, so if improvements to the process don't bring value to the end product, they are kind of useless.

To be even clearer, I'll just list a few examples off the top of my head of what I consider a valid reason/goal for an improvement to the testing process:

  • to test more features, in breadth and depth, in the same amount of time as currently (either fewer work hours, or more value in the same total number of hours - reduces the cost of testing -> valid business goal);
  • to involve the testing team at the earliest reasonable stage (reduces the overall time needed to complete a project, because some work is done in parallel, plus the earlier a problem is found, the cheaper it is to fix -> valid business goal);
  • to improve reporting on test results (gives the product owner and developers the necessary information faster and in better shape -> enables the product owner to make business decisions in time + enables developers to fix issues faster -> saves everyone time and money, valid business goal);
  • to reduce the time needed to onboard new team members (less time means the project can start benefiting from the new addition faster; it also improves the stability of the team, which enables the product owner to make proper delivery estimations -> valid business goal);
  • and so on and so forth.


It should all trace back to business value, to the reason why we create the product at all (and no, it's not to "make money" - it's to solve problems our clients have, in a timely manner and with satisfying quality). Think of it as an agile vertical slice: until the feature is released, nobody cares how much effort you spent on it. If it's not released, you don't get paid. If testing didn't bring value - it doesn't matter how shiny it is. You don't want to do a process for the sake of doing a process (and if you think I'm wrong, you've probably been ISTQBfied).

And if you are making a maturity model for testing that only goes up to the point where the bug is reported - awesome, just don't call it a "testing maturity model", call it a "maturity model of reporting bugs" (and think about why you even want a separate maturity model for that). Because that's what it's about: reporting bugs. Testing goes beyond that. A good tester enables the team to make a better product. A mature testing process would consider at the very least:
- keeping good (but not excessive) testing documentation (from notes, to test strategy, to bugs, to test results);
- having a process of onboarding new test team members;
- reporting bugs;
- following up with the bugs (triaging/estimating/confirming the fix/updating test data and test ideas list/etc.);
- acceptance testing: the requirements and conditions under which a pass is granted, and clarification of what "acceptance testing passed" would mean;
- the approach to testing (when do we start testing, what do we do, how do we do it, when do we stop, how do we report on results, and what do we do after);
- team collaboration (by what means, with whom and how often do we exchange information);
- what is the goal of having testing on the product at all.

So, just to reiterate: from my point of view, a mature testing process eliminates the situation where nothing is done with the results of testing. If nothing is being done - you don't need to spend testers' time on the project; you already know it's crap.
