Tuesday, 18 February 2014

Maturity of testing process and all that jazzz

Today I spent some time discussing a thing that the other person called a "Holistic Testing Maturity Model". Now, I probably didn't understand everything he was saying properly, but it got me thinking, and I want to share. I'm not claiming to present that other person's position fully and fairly - it was just a starting point for my own thoughts.

What I got from the conversation is the idea of having a separate testing process maturity model that only talks about finding and reporting bugs, but not about dealing with them. So you can be very mature according to this model if the testing team on your project is all shiny - but at the same time you may have an awful product, because you don't actually do much with the information your shiny testers provide. You will still be able to claim you have a mature testing process. *facepalm*

Now, maybe I spent too much time as a test lead, business analyst and product owner (yep, I wasn't always hands-on testing), but that just doesn't make any fracking sense to me. Excuse my French. Of course it is nice to have a good process for finding and reporting bugs, but what goal do we achieve with that? To have good testing? But (a) that's not a goal (it's a means), and (b) that is not good testing. At least not what I call good testing.

Thing is, testing doesn't exist in a vacuum. It exists to add value to the overall product: to provide information that helps make the product (and the process of creating that product) better. If that information is not being used - why should it even exist? If you write perfect bug reports, but the bugs stay forever unfixed - how can this be called a mature testing process? Testing isn't about finding bugs, it's about finding useful information (bug reports are part of that information, surely), so if we cross out the word "useful", that's a crappy testing process from my point of view.

To be sure, I am not saying that product quality is solely the testing team's responsibility. Neither am I saying that having a nice and clear process for getting to the point where information is provided (but not yet used) is a bad thing. What I'm saying is: a process should be there for a reason, so if improvements to the process don't bring value to the end product, they are kind of useless.

To be even more clear, I'll just list a few examples off the top of my head of what I consider a valid reason/goal for an improvement of the testing process:

  • to test more features in breadth and depth in the same amount of time as currently (either fewer work hours or more value in the same total number of hours - reduces the cost of testing -> valid business goal);
  • to involve the testing team at the earliest reasonable stage (reduces the overall time needed to complete a project because some stuff is being done in parallel + the earlier a problem is found, the cheaper it is to fix -> valid business goal);
  • to improve reporting on test results (gives the product owner and developers the necessary information faster and in better shape -> enables the product owner to make business decisions in time + enables developers to fix issues faster -> saves everyone time and money, valid business goal);
  • to reduce the time for onboarding new team members (less time means the project can start benefiting from the new addition faster; it also improves the stability of the team, which enables the product owner to make proper delivery estimations -> valid business goal);
  • and so on and so forth.


It should all trace back to business value, to the reason why we create the product at all (and no, it's not to "make money" - it's to solve the problems our clients have, in a timely manner and with satisfying quality). Think of it as an agile vertical slice: until the feature is released, nobody cares how much effort you spent on it. If it's not released, you don't get paid. If testing didn't bring value, it doesn't matter how shiny it is. You don't want to do a process for the sake of doing a process (and if you think I'm wrong, you've probably been ISTQBfied).

And if you are making a maturity model for testing that only goes up to the point where the bug is reported - awesome, just don't call it a "testing maturity model", call it a "maturity model of reporting bugs" (and think about why you even want a separate maturity model for that). Because that's what it's about: reporting bugs. Testing goes beyond that. A good tester enables the team to make a better product. A mature testing process would consider at the very least:
- keeping good (but not excessive) testing documentation (from notes, to test strategy, to bugs, to test results);
- having a process of onboarding new test team members;
- reporting bugs;
- following up with the bugs (triaging/estimating/confirming the fix/updating test data and test ideas list/etc.);
- acceptance testing: the requirements and conditions under which a pass is granted, and clarification on what "acceptance testing passed" would mean;
- approach to testing (when do we start testing, what do we do, how do we do it, when do we stop, how do we report on results and what do we do after);
- team collaboration stuff (by what means, with whom and how often do we exchange information);
- what is the goal of having testing on the product at all.

So, just to reiterate: from my point of view a mature testing process eliminates the situation where nothing is being done with the results of testing. If nothing is being done, you don't need to spend testers' time on the project - you already know it's crap.

Thursday, 13 February 2014

Exploratory testing is about exploration - unedited version of article for "Trapeze"

(That's what I initially sent to the journal. It got quite heavily edited and rewritten on the way. I still like the initial version better, so here it is.
The edited version can be found here: http://www.testingcircus.com/testing-trapeze-2014-february-edition/)

Exploratory testing seems to be becoming more and more of a big thing lately, which fills my heart with joy, but in recent years I've realized that people mean completely different things when they talk about it. Sometimes this difference is amusing, sometimes it's enriching and eye-opening, and sometimes it's annoying (e.g. when your colleagues refuse to try something new, claiming that they are already "doing this exploratory testing thing"). I don't claim to have the ultimate answer, of course, but I'd like to talk about what I mean by "exploratory testing", and why I love it so much.

For me, exploratory testing is all about the idea of exploration. It's not about using charters and session-based testing, it's not about an agile environment, and it's definitely not about some list of heuristics (becoming the new "best practices") you absolutely must use. It's about asking questions, experimenting and gaining knowledge about the world (and the software under test in particular). And as a good explorer, of course you are also bound to keep good notes of your deeds. I like to think about it as if I were the Curiosity rover. Or even better, a crew member of the starship Enterprise (LLAP to all you fellow geeks out there): exploring brand new worlds and happily startrekking across the universe. Never knowing all of it, but having the tools, the desire and the attitude to acquire new knowledge.

As a tester, I have knowledge and assumptions about the software I'm about to test: business requirements, technology descriptions, knowledge of the environment, common sense, etc. I also have a map, based on my knowledge: test cases, test strategy, the process I'm about to follow. And of course I have many tools to help me in my job: from specialized software to heuristics and good ol' test analysis techniques. All of that gives me a good place to start, but if I let it define what I do, my job would become mechanical and it really wouldn't need the power of the human brain that much. When I'm doing exploratory testing (and I'm always doing exploratory testing), I have to keep asking questions and remember to readjust my assumptions. It absolutely blows my mind how much you can gain from such a simple idea!

First of all, it gives you efficiency, because by asking questions you gain understanding of the software, you make sure you don't use outdated documentation, you get to know the people on the team and what they can do, you provide fast feedback, and you help everyone on the team do a better job by giving them more up-to-date information about the project than they would otherwise have. It also makes the job more fun, which you can't underestimate if you get bored as easily as I do. Another thing exploratory testing encourages you to do is to use techniques from other fields - humans have been exploring the world since forever, and there are heaps of historical experience and wisdom waiting to be applied to testing software.

Also, if you agree that testing is about asking questions and gaining knowledge, it can help you with the project roles that others try to force on you. I've found it to be a common occurrence that the product manager (or whoever is responsible for the application) presses the testing team into answering the "is it ready to go to production" and "when will it be ready" questions. The problem here is that usually the testing team doesn't really have the power to influence the situation much: they can't decide what is acceptable to go to production, they can't assign additional time to the testing or development teams, and they can't rush developers, designers, translators, etc. into doing their jobs. And if you have no power, you can't take on the responsibility.

All these seemingly abstract ideas form a perfectly practical approach that I've been more or less successfully applying in my daily job for the last 6 years, even before I knew the terminology to talk about it and to understand exactly what I was doing. I call that approach exploratory testing. Let me share its main points with you in a series of statements.

1. The mission of software testing isn't to "provide quality", it's to gather and provide helpful information to those who make decisions about the product and its quality.
2. Exploratory testing is an approach that doesn't depend on the SDLC; it can be applied to any situation (even outside of testing itself).
3. You can never know for sure how much time it will take to finish the testing. The challenges of planning and performing testing are rooted in the "working in the unknown" part. It's just like scientific research in that regard.
4. Exploratory testing consists of many iterations of two very different steps: discover and investigate. Each step has its own challenges and goals. In the "discover" step you concentrate on finding an issue (an issue being a problem or a question), and in the "investigate" step your goal is to gain the information you need to deal with that issue.
5. It is important to ask questions (no such thing as a stupid question!), to look at the problem from a different angle from time to time, to perform root cause analysis, to use new tests instead of old ones, and to keep notes.
6. Use automation to help with mundane tasks and free your precious time for smart tasks - not to replace manual testing of everything.
7. Give feedback as soon as you can make it useful. Give clear feedback, and make sure it isn't personal or insulting.
8. Prioritize your work (which features to test first, how much time to spend on a problem, which risks to mitigate and which tasks to do first, etc.).
9. Know your tools well: heuristics, practices, diagram notations, software that can help on the job, etc.
10. Know the software you are testing from business and technological points of view: what problems it is supposed to solve, who the stakeholders are, and how it works (at a system level at least).
11. Don't spend time on documentation no one will read or use.
12. Keep documentation that is in use up-to-date. Don't let your experience and knowledge of the project stay only in your head.
13. Use test ideas to guide your testing rather than test cases to define it.

There is one more thing I feel it's necessary to say out loud. Exploratory testing (the way I understand it) doesn't require you to be an experienced tester in order to be good at it. A person completely new to the product and/or to software testing itself can learn exploratory testing, use it right away and be awesome. The only thing that is absolutely required is the passion to explore. Everything else will come with the answers to the questions you ask. I tested that with a few new testers way back by teaching them to test exploratorily instead of the classical "do step-by-step test cases until you learn them by heart" approach, and it worked - they are now totally awesome.

That's all, folks. Once again, I don't claim to know what exploratory testing is really about - this is just one way of looking at it. I'd like to thank all the testers whose blogs I have read for the last 8 years, because I definitely learned a lot by doing that. And specifically, the biggest thank you goes to James Bach and Michael Bolton for their absolutely brilliant "Rapid Software Testing" course, which gave me ideas to think about, confidence that I'm not going the wrong way and, most of all, the terminology I use to vocalize my thoughts on testing.

Tuesday, 11 February 2014

Auckland WeTest meetup, Mobile testing

Today I attended my favorite event for testers, WeTest Auckland - a meetup, hosted by Assurity, where testers and people interested in testing share their experience and discuss challenges that arise in testing. That was my third time, and I'm definitely coming again, because each time I go there I have so much fun, and I also get inspired, and learn stuff, and meet cool people who I would never have met otherwise... gosh, it sounds like I'm doing an advertisement because I was told to... but that's totally not the case, I'm genuinely enthusiastic about this thing. I love the atmosphere and the facilitated discussion format. I am quite a shy person actually, but at these meetups I feel comfortable talking about testing. More comfortable, in fact, than in my own company, where most of the time I don't feel that people understand what I'm trying to communicate, so I get nervous, and my English gets awful, and I end up just feeling stupid. Which probably happens because in my company we talk about products a lot (and they are crazy complicated as a whole ecosystem, and I don't really know them that well as of yet), whereas at meetups we talk about testing in general... aaaanyway, back to the meetup.

The first ever WeTest was about automation, the second one was about exploratory testing (with me presenting, actually, which was a special brand of fun because of how enthusiastic and accepting the audience was), and today we talked about mobile testing. Morris Nye did an awesome job presenting his experience report on mobile testing and automation in that area, and as always there were people asking questions and sharing their experience. Even though I've spent most of the last 4 years doing mobile testing, I learned a few things today. Mostly just the names of the tools (or rather the fact that they exist and the ways they behave), but still. And of course it was awesome to meet new people (which I'm usually terrified of) and hang out with those I knew already. I can't express how much I value this opportunity. Way to go, Assurity!

Hopefully from now on the WeTest Auckland meetup will become a monthly event (as opposed to bimonthly before now). :-)

Saturday, 8 February 2014

Automation in functional testing

When people hear the word "automation" near the word "testing" they usually think about automated testing, meaning "automatic execution of test scripts". But while automated testing is a big, awesome area of interest, it's not the only way you can use automation to aid testing. I'm probably saying nothing new here; it's just a topic that I find fascinating and practical. Soooo... what else can you do automatically to make your life easier if you are a software tester?

I shall probably start with a rough description of the stuff that a functional tester does more or less often in his or her daily job (based on my personal experience, of course), and then think about which parts of that it would make sense to automate.

As a tester I:
- analyze requirements for the product;
- create test strategy for the product and test design for separate features of the product;
- set up testing environment;
- prepare test data;
- test according to test strategy and test design;
- do exploratory testing outside of test design;
- do regression testing and bug fixes verification testing;
- keep notes during any kind of testing;
- report bugs;
- communicate with the team to resolve questions and keep up to date on what's going on on the project;
- report and analyse testing results.

Going top to bottom, I can see a few obvious candidates for automation outside of automated testing:
1) setting up test environment
There are plenty of tools that can help with that (especially if the environment is Unix/Linux based): from bash scripts, to Puppet, to cloning VMs, to restoring a database from a snapshot, to all-around continuous delivery systems that can do everything from building a new build from source code, to deploying it on a newly provisioned environment, to running a predefined set of automated tests and reporting on the results.

At the very least, setting up a test environment for a long-lived project should be documented and aided with prepared configuration files, so that it takes the minimum amount of time for anyone to pick up the instructions and create a fully operational test environment.
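To make that a bit more concrete, here is a minimal bash sketch of such a "refresh the environment" script (the service name, paths and the PostgreSQL snapshot are made up for the example - the real steps obviously depend on your stack):

#!/bin/bash
# Hypothetical example: bring a test environment back to a known-good state.
set -e

APP_HOME=/opt/myapp                  # assumed install location
SNAPSHOT=/backups/myapp_test.dump    # assumed database snapshot

# stop the application before touching its data
sudo service myapp stop

# drop and restore the test database from the snapshot
dropdb myapp_test
createdb myapp_test
pg_restore -d myapp_test "$SNAPSHOT"

# deploy the latest build and start the application again
cp /builds/latest/myapp.war "$APP_HOME/webapps/"
sudo service myapp start

echo "Test environment is ready"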

2) generating test data
This is especially important for performance testing, but functional testing can also benefit from having a way to automatically generate the necessary amount and variety of test data instead of doing it manually each time. Or, at the very least, if test data has been generated manually once, it's a good idea to make a snapshot of that data in order to be able to return to that state (or recreate it on a new environment) whenever needed.
Excel and similar tools are pretty cool for preparing randomized test data in CSV files that can then be used by tools like JMeter or SoapUI, or by any scripting language, to actually populate the data storage. Sometimes it's easier to do it by emulating user activity (e.g. with JMeter and HTTP requests), and sometimes it's easy enough to just go straight to the database and do some SQL scripting. If speed is the priority, I recommend starting at the lowest level with the least amount of processing logic and only going up if adding test data at that level is overly complicated. On the other hand, emulating user actions in order to add data has the side effect of testing those actions in the process.
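For illustration, even plain bash can prepare a randomized CSV that JMeter's CSV Data Set Config (or any other tool) can then consume - the columns and counts below are just an example:

#!/bin/bash
# Generate 1000 randomized test users in CSV form.
OUT=userlist.csv
echo "username,email,age" > "$OUT"
for i in $(seq 1 1000); do
  # $RANDOM gives a pseudo-random number - enough variety for test data
  echo "user${i},user${i}@example.com,$(( (RANDOM % 60) + 18 ))" >> "$OUT"
done
echo "Generated $(( $(wc -l < "$OUT") - 1 )) users in $OUT"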

3) generating/using templates for test documentation
This includes the test strategy document, test design documents, bug reports and test run notes/sessions (or whatever you use to keep notes on the testing). All these documents have structure (or at least they are supposed to), so they probably have some content that doesn't change from document to document. Prepopulate it to save time on actually typing that stuff! It can also be useful to generate test run/session descriptions from the test design documents automatically.

4) keeping notes during testing
It highly depends on the application you are testing, but sometimes it's useful to have a tool that sits in the background and records some information for you.
Some examples would be:
- an automatic screenshot-maker set to take screenshots every 10 seconds or so (see the sketch after this list);
- a program to record system events - I developed one about 6 years ago for testing a few Windows applications; I'm not sure if there are ready-made tools or any wide demand for them;
- logs built into the application. Logs are great and highly useful! Make sure your dev team is aware of that, and that they have implemented a few levels of logging (at least Info, Warning and Error).
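For the screenshot-maker, a minimal sketch - assuming a Linux desktop with the scrot utility installed (any command-line screenshot tool would do):

#!/bin/bash
# Take a screenshot every 10 seconds while a testing session is running.
OUTDIR=~/testing-notes/screens/$(date +%Y%m%d-%H%M%S)
mkdir -p "$OUTDIR"
while true; do
  # scrot expands the date placeholders in the file name itself
  scrot "$OUTDIR/shot-%Y%m%d-%H%M%S.png"
  sleep 10
done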

5) generating documents from testing notes
If you keep testing notes in a specific format, they can later be automatically parsed and used to generate other documents: e.g. bug reports, reports on test results, templates for future testing sessions or a list of test ideas.
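As an illustration of the idea, suppose the session notes are plain text files where every potential defect starts with "BUG:" and every new idea with "IDEA:" (a convention I'm making up for this example); then the extraction is a couple of one-liners:

#!/bin/bash
# Pull draft bug reports and new test ideas out of all session notes.
# The BUG:/IDEA: prefixes are just an assumed note-taking convention.
grep -h "^BUG:"  notes/session-*.txt | sed 's/^BUG: */- /'  > draft-bug-reports.txt
grep -h "^IDEA:" notes/session-*.txt | sed 's/^IDEA: */- /' > test-ideas.txt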

6) analysing application logs
Sometimes it is necessary to monitor application and server logs during testing and to analyse them. When you are doing it for a single event, there is no reason not to just use tail -f, but if the monitoring is a more or less regular task, it's worthwhile to set up a monitoring system to do most of the job for you. Splunk or Logstash-Elasticsearch-Kibana would do splendidly. They also allow you to store and access historical data, so you can return to the point in time when you noticed a fault or weird behavior and see what was going on at that exact moment. Isn't that cool?
Also, in the case of JMeter you sometimes get results in a form that requires manual processing before they can be used. For example, the aggregate report would contain every HTTP sampler, while you might only need a few. It's easy enough to write a macro using regexp Search/Replace in Notepad++ to do this for you in a matter of seconds.
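The same filtering also works from the command line; here is a hedged sketch with awk, assuming the default CSV results format where the sampler label sits in the third column (the label names are placeholders):

#!/bin/bash
# Keep the header plus only the samplers we actually care about.
awk -F',' 'NR==1 || $3=="Login request" || $3=="Search request"' results.jtl > results-filtered.jtl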

Less obvious would probably be:
1) aid in test design
For example, a macro that goes through the product requirements and colors all verbs one color and all adjectives a different color. It's useful to have this colorization with some of the test design techniques, but usually you do it manually.

2) script to go to a certain place in the application
It makes sense when you are testing a stateful application, for example a multipage registration form with lots of fields, or a game where you need to perform a certain number of actions to get to a certain location/level. If there is no backdoor available that leads directly to the point where you need to be (e.g. page 5, to test how the fields on that page work together), you might lose time getting there. Or you might use automated scripts to emulate getting there, and just do what you are really after - testing.
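To make it concrete, here is a sketch of such a "fast-forward" script for a hypothetical multipage registration form, done with curl (the URLs and field names are invented for the example): it submits pages 1-4 with canned data and keeps the session cookie, so the actual testing can start on page 5.

#!/bin/bash
# Walk through the first four pages of an imaginary registration wizard.
BASE=https://test.example.com/register
JAR=cookies.txt   # cookie jar that keeps the session between requests

curl -s -c "$JAR" -b "$JAR" -d "name=Test&surname=User"   "$BASE/page1" > /dev/null
curl -s -c "$JAR" -b "$JAR" -d "email=test@example.com"   "$BASE/page2" > /dev/null
curl -s -c "$JAR" -b "$JAR" -d "country=NZ&city=Auckland" "$BASE/page3" > /dev/null
curl -s -c "$JAR" -b "$JAR" -d "plan=basic"               "$BASE/page4" > /dev/null
echo "Session is in $JAR - page 5 is ready for manual testing"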

In the end I'd say that any repetitive action is a candidate for automation, if you don't absolutely need a human being to do it, and if it's repetitive enough. I.e. don't automate something if you are only gonna do it a few times, and/or if automating it is a complicated, time-consuming task that just isn't worth it. In every other case - why not make your (and everyone else's) life easier?

(update from the comments):
3) tools to gather usage statistics in production
An example would be Sentry. I'm sure I also used a tool named "captain" for Android applications, but I can't find it right now. Anyway, these tools can help deal with client bugs, so you won't have to actually try and get steps for reproducing the issue from a client - you will have those steps gathered and reported automatically.

Friday, 7 February 2014

About ISTQB certification

ISTQB certification sucks on so many levels that I can't even bring myself to be nice about it.

JMeter non-GUI distributed testing

Since I'm now in performance testing, I've been using JMeter quite a lot in recent months. I've also spent a lot of time on Google after reading the whole JMeter reference guide. There's no reason not to share these little things now that the problems are solved.

Distributed testing with JMeter is supposed to be a very easy task according to the official guide. Yet for me it didn't work straight away. I blame it on the way we deployed the nodes (using a Puppet master that screwed up the configuration at first), and you may also run into this if you deploy JMeter by copy-pasting the JMeter folder between nodes. The solution to all my problems turned out to be really simple, but maybe writing it down will save someone else time on googling.

My environment: 
- a few CentOS 6.3 VMs, one of them arbitrarily chosen to be the JMeter master (the others are slaves). No GUI access to any of the nodes;
- my local machine with JMeter in GUI mode.

Steps to make it work and get results back

1) on each JMeter node modify $JMETER_HOME/bin/jmeter
- make sure the heap size is configured the way you want it
- if you want visual JVM monitoring, make sure the host is set to the proper IP address on each node
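In practice that usually means editing the HEAP line near the top of $JMETER_HOME/bin/jmeter; something like this (the sizes are only an example - match them to the node's memory, and the exact variable layout may differ between JMeter versions):

# inside $JMETER_HOME/bin/jmeter - give the JVM a fixed, generous heap
HEAP="-Xms2g -Xmx2g"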

2) on each slave node modify jmeter-server
- make sure that RMI_HOST_DEF is explicitly set to that node's IP address
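In other words, uncomment/add one line near the top of $JMETER_HOME/bin/jmeter-server (the IP address is a placeholder, of course):

# inside $JMETER_HOME/bin/jmeter-server - bind RMI to this node's real address
RMI_HOST_DEF=-Djava.rmi.server.hostname=192.168.1.11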

3) on the JMeter master modify the jmeter.properties file
- set remote_hosts to a list of all JMeter nodes except the master itself
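For example, with two slave nodes the line in $JMETER_HOME/bin/jmeter.properties would look like this (the IPs are placeholders):

# jmeter.properties on the master - comma-separated list of slave nodes
remote_hosts=192.168.1.11,192.168.1.12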

4) on each slave node start/restart the JMeter server. Either use service jmeter restart (if you installed the JMeter server as a service) or kill the current JMeter server process and then start it again with $JMETER_HOME/bin/jmeter-server
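If it's not a service, the manual restart is roughly this (adjust the pkill pattern if your process shows up differently in ps):

# on each slave node: stop the old jmeter-server process and start a new one
pkill -f jmeter-server || true
nohup $JMETER_HOME/bin/jmeter-server > /tmp/jmeter-server.out 2>&1 &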

5) if your testscript uses any external files, you have to put those files on each slave node into the same place that is specified in the testscript. From my experience it's safer to specify a full path rather than a relative one (i.e. /etc/testscripts/in/userlist.csv instead of in/userlist.csv), because relative paths are resolved based on the folder from which you are executing jmeter, not on the testscript location

6) in your testscript configure listeners to save results to files. Again, it is safer to use full paths instead of relative ones. JMeter will gather results from all slave nodes and save them on the master node

7) place your testscript on JMeter master and run it with a command like this:
$JMETER_HOME/bin/jmeter -n -t <test script> -r
- additionally, if you want to get some graphs, enable the appropriate listener in your testscript and run jmeter with the additional parameter -l <filename.jtl> (a full example command is below). Note that this creates significant additional load on the JMeter nodes, so it probably shouldn't be used when it is not absolutely necessary.
- additionally, you can configure the JMeter Summarizer to save and show aggregated response times (one set of results for all samplers) during the run. This blog post does a good job describing how to use it. For me it wasn't of much use, because I needed to get aggregated response times separately for each of the many samplers.
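Putting it together, a full non-GUI distributed run that also saves raw results could look like this (the paths are just examples):

# run the testscript on all configured slave nodes and collect raw results on the master
$JMETER_HOME/bin/jmeter -n -t /etc/testscripts/load_test.jmx -r -l /etc/testscripts/out/results.jtl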

8) to see the aggregated results, copy the files that were generated by the listeners from the JMeter master to your local machine, run JMeter in GUI mode, open your testscript, choose the appropriate listener and point it to the file. JMeter will process the file and give you the aggregate table as if you had run the script in GUI mode from the beginning. The same works for graph listeners - just point them to a generated *.jtl file.

9) oh, and if you need to stop the distributed test in non-GUI mode, use the $JMETER_HOME/bin/shutdown.sh script on the JMeter master - it will send a "shutdown all" command to the slave nodes.

Hope that was helpful. I know I would have been glad to find this when I first googled for "jmeter distributed testing get aggregate response times", or "jmeter master cannot connect to slaves", or "jmeter master doesn't get results back from slaves". :-)

Another attempt to keep a proper professional blog

And by "proper" I mean the one that is being regularly updated. Surely I tried this couple of times before, and failed miserably, but who knows... I'm different, my surroundings are different. I care less about impressing others and more about just talking my mind these days. So maybe this blog will survive.

I got the urge to start writing a tester's blog again when my article recently got edited to the point of being unrecognizable. Though I understand why the journal might want to have a certain format and similar language throughout all of its articles, it was still frustrating. Thing is, I like to write, but I'm far from being good at it. Still, I have a recognisable voice and a distinct mindset which show in everything I write. When it gets sterilized for the sake of improving the overall quality of the publication, I (ironically) get frustrated. So I decided to just publish the very first version of the article I sent to Trapeze (a new journal for testers in Australia and New Zealand) after they publish their first issue with the final, after-lots-of-editing version of the same article. Thanks to Katrina's (Trapeze's editor) patience, the final version is okay by me, but I still like the original one better. So I'll publish it here after February 15.

Over the years I've spent in IT I've been doing lots of different stuff. I manually tested all sorts of "general purpose" (meaning no military/scientific/etc.) applications, did some automation and automation frameworks, did a lot of test design, was a test lead for a "testing as a service" team, and even spent some time as a product manager. I worked at Gehtsoft and at Yandex, and now I'm a performance engineer at Orion. Most of the details of my job are protected by NDAs, which I respect, but there is still stuff that I want to talk about sometimes, and which is NDA-free. And I'm still learning so much all the time...

Despite all the time in the industry, I'm just a tester, you know? No one fancy. I don't like fancy, I don't like scripts and having to conform to some standards of behaviour. This is how you're supposed to behave, this is how you're supposed to write, these are the people you should be in awe of... I hate this shit. I'm a tester, and I'm a geek (and yes, the blog name is a reference and a reverence), and I'm a lass who's been in IT for a long time - that's about it. To hell with standards!

I'm also not a native English speaker, so if anyone is reading this, bear with me.