Saturday, 8 February 2014

Automation in functional testing

When people hear the word "automation" near the word "testing" they usually think about automated testing, meaning "automatic execution of test scripts". But while automated testing is a big, awesome area of interest, it's not the only way you can use automation to aid testing. I'm probably not saying anything new here; it's just a topic that I find fascinating and practical. Soooo... what else can you do automatically to make your life easier if you are a software tester?

I'll start with a rough description of the stuff that a functional tester does more or less often in his or her daily job (based on my personal experience, of course), and then think about which of those activities it would make sense to automate.

As a tester I:
- analyze requirements for the product;
- create test strategy for the product and test design for separate features of the product;
- set up testing environment;
- prepare test data;
- test according to test strategy and test design;
- do exploratory testing outside of test design;
- do regression testing and bug fixes verification testing;
- keep notes during any kind of testing;
- report bugs;
- communicate with the team to resolve questions and keep up to date on what's going on on the project;
- report and analyse testing results.

Going top to bottom, I can see a few obvious candidates for automation outside of automated testing:
1) setting up test environment
There are plenty of tools that can help with that (especially if the environment is Unix/Linux based): from bash scripts to Puppet, to cloning VMs, to restoring a database from a snapshot, all the way up to full continuous delivery systems that build a new build from source code, deploy it onto a newly provisioned environment, run a predefined set of automated tests and report on the results.

At the very least, setting up a test environment for a long-lived project should be documented and aided with prepared configuration files, so that anyone can pick up the instructions and create a fully operational test environment in the minimum amount of time.
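
To give a flavour of what even a small script can cover, here is a minimal Python sketch; the database name, snapshot path, build path and service name are all made up for the example, and in real life this could just as well be a bash script or a Puppet manifest:

```python
#!/usr/bin/env python
"""Sketch: recreate a test environment from a prepared snapshot.

All paths, database names and service names below are placeholders --
adjust them to your own environment.
"""
import subprocess

DB_NAME = "app_test"                      # hypothetical database name
SNAPSHOT = "/backups/app_test_clean.sql"  # hypothetical snapshot dump
BUILD = "/builds/app-latest.war"          # hypothetical build artifact

def run(cmd):
    print(">>", " ".join(cmd))
    subprocess.check_call(cmd)

# 1. stop the application so nothing writes to the database meanwhile
run(["systemctl", "stop", "app"])

# 2. drop and restore the database from the known-good snapshot
run(["dropdb", "--if-exists", DB_NAME])
run(["createdb", DB_NAME])
run(["psql", "-d", DB_NAME, "-f", SNAPSHOT])

# 3. deploy the fresh build and start the application again
run(["cp", BUILD, "/opt/app/app.war"])
run(["systemctl", "start", "app"])
```

The point is not the particular commands, but that the whole sequence lives in one place and can be rerun identically by anyone on the team.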

2) generating test data
This is especially important for performance testing, but functional testing can also benefit from having a way to automatically generate the necessary amount and variety of test data instead of doing it manually each time. Or at least, if test data has been generated manually once, it's a good idea to make a snapshot of that data in order to be able to return to that state (or recreate it on a new environment) whenever needed.
Excel and similar tools are pretty cool for preparing randomized test data in csv files that can then be used by tools like JMeter, or SoapUI, or any scripting language to actually populate the data storage. Sometimes it's easier to do it via emulation of user activity (e.g. with JMeter and HTTP requests), and sometimes it's easy enough to just go straight to the database and do some SQL scripting. If speed is the priority, I recommend starting at the lowest level with the least amount of processing logic, and only going up if adding test data on that level is overly complicated. On the other hand, emulating user actions in order to add data has a side effect of testing those actions in the process.
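
As a small illustration, here is a Python sketch that writes a randomized users.csv which JMeter, SoapUI or a SQL loader can then pick up; the columns and value ranges are invented for the example:

```python
#!/usr/bin/env python
"""Sketch: generate randomized test users into a CSV file.

The column set and value ranges are made up for the example;
shape them to whatever your application actually needs.
"""
import csv
import random
import string

def random_string(length=8):
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

with open("users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["username", "email", "age", "country"])
    for _ in range(1000):  # adjust the volume to your needs
        name = random_string()
        writer.writerow([
            name,
            "%s@example.com" % name,
            random.randint(18, 90),
            random.choice(["DE", "RU", "US", "GB"]),
        ])
```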

3) generating/using templates for test documentation
This includes the test strategy document, test design documents, bug reports and test run notes/sessions (or whatever you use to keep notes on the testing). All these documents have structure (or at least they are supposed to), so they will probably have some content that doesn't change from document to document. Prepopulate it to save time on actually typing that stuff! It can also be useful to generate test run/session descriptions from the test design documents automatically.
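
A toy Python sketch of the idea; the section headers and field values are just an example structure, not a prescription:

```python
#!/usr/bin/env python
"""Sketch: prepopulate a test session note from a template.

The section headers and field values are only an example --
use whatever your own session notes normally contain.
"""
from datetime import date

TEMPLATE = """Test session: {charter}
Date: {day}
Tester: {tester}
Build: {build}

Notes:

Bugs:

Questions / follow-ups:
"""

note = TEMPLATE.format(
    charter="Registration form, page 3",  # hypothetical charter
    day=date.today().isoformat(),
    tester="your name here",
    build="1.2.345",                       # hypothetical build number
)

with open("session_registration_page3.txt", "w") as f:
    f.write(note)
```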

4) keeping notes during testing
It highly depends on the application you are testing, but sometimes it's useful to have a tool that sits in the background and records some information for you.
Some examples would be:
- automatic screenshot-maker set to take screenshots every 10 seconds or so (see the sketch after this list);
- a program to record system events - I developed one like that about 6 years ago for testing a few Windows applications; not sure if there are any ready-made tools or any wide demand for them;
- logs built into the application. Logs are great and highly useful! Make sure your dev team is aware of that, and that they implement a few levels of logging (at least Info, Warning and Error).
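
Here is a minimal Python sketch of the screenshot-maker, assuming the Pillow library is installed (ImageGrab works out of the box on Windows and macOS; on Linux it needs extra setup):

```python
#!/usr/bin/env python
"""Sketch: take a screenshot every 10 seconds while you test.

Assumes the Pillow library is installed.
"""
import time
from PIL import ImageGrab

INTERVAL = 10  # seconds between screenshots

while True:
    stamp = time.strftime("%Y%m%d_%H%M%S")
    ImageGrab.grab().save("screenshot_%s.png" % stamp)
    time.sleep(INTERVAL)
```

Run it in a spare terminal while you test and stop it with Ctrl+C; the timestamped files make it easy to match a screenshot to a note taken at the same moment.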

5) generating documents from testing notes
If you keep testing notes in a specific format, they can later be automatically parsed and used for generating other documents: e.g. bug reports, reports on test results, templates for future testing sessions or a list of test ideas.
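
For instance, if you adopt the (entirely arbitrary) convention of prefixing bug observations with "BUG:" and test ideas with "IDEA:" in your notes, a few lines of Python can pull them out into draft documents:

```python
#!/usr/bin/env python
"""Sketch: extract draft bug reports and test ideas from plain-text
session notes. Assumes the arbitrary convention that bug observations
start with "BUG:" and test ideas start with "IDEA:".
"""
bugs, ideas = [], []

with open("session_notes.txt") as f:
    for line in f:
        line = line.strip()
        if line.startswith("BUG:"):
            bugs.append(line[len("BUG:"):].strip())
        elif line.startswith("IDEA:"):
            ideas.append(line[len("IDEA:"):].strip())

with open("draft_bug_reports.txt", "w") as out:
    for i, bug in enumerate(bugs, 1):
        out.write("Bug %d: %s\n" % (i, bug))

with open("test_ideas.txt", "w") as out:
    out.write("\n".join(ideas) + "\n")
```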

6) analysing application logs
Sometimes it is necessary to monitor application and server logs during testing and to analyse them. When you are doing it for a single event, there is no reason not to just use tail -f, but if the monitoring is a more or less regular task, it's worthwhile to set up a monitoring system to do most of the job for you. Splunk or the Logstash-ElasticSearch-Kibana stack would do splendidly. They also allow you to store and access historical data, so you can return to the point in time when you noticed a fault or weird behavior and see what was going on at that exact moment. Isn't that cool?
Also, in the case of JMeter you sometimes get results in a form that requires manual processing before they can be used. For example, the aggregate report contains every HTTP sampler, while you might only need a few. It's easy enough to write a macro using regexp Search/Replace in Notepad++ that does this for you in a matter of seconds.
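
The same filtering can also be scripted; here is a Python sketch that keeps only the samplers you care about from a JMeter CSV results file, assuming it was saved with the default header row (the sampler labels are placeholders):

```python
#!/usr/bin/env python
"""Sketch: keep only selected samplers from a JMeter CSV results file.

Assumes the results were saved as CSV with a header row containing a
"label" column (JMeter's default CSV output has one). The sampler
names below are placeholders.
"""
import csv

KEEP = {"Login request", "Search request"}  # hypothetical sampler labels

with open("results.csv") as src, open("results_filtered.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row["label"] in KEEP:
            writer.writerow(row)
```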

Less obvious candidates would probably be:
1) aid in test design
For example, a macro that goes through product requirements and colors all verbs one color and all adjectives a different color. With some test design techniques it's useful to have this colorization, but usually you do it manually.
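
A rough Python sketch of the same idea, assuming NLTK is installed together with its tokenizer and tagger data; instead of coloring, it just wraps verbs and adjectives in markers, which is easy to turn into real highlighting later:

```python
#!/usr/bin/env python
"""Sketch: mark verbs and adjectives in a requirements sentence.

Assumes NLTK is installed along with its 'punkt' tokenizer and default
POS tagger data (via nltk.download).
"""
import nltk

requirement = "The system must validate the address and reject invalid entries."

tagged = nltk.pos_tag(nltk.word_tokenize(requirement))

marked = []
for word, tag in tagged:
    if tag.startswith("VB"):    # verbs
        marked.append("[V:%s]" % word)
    elif tag.startswith("JJ"):  # adjectives
        marked.append("[A:%s]" % word)
    else:
        marked.append(word)

print(" ".join(marked))
```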

2) script to go to a certain place in the application
It makes sense when you are testing a stateful application: for example, a multi-page registration form with lots of fields, or a game where you need to perform a certain number of actions to get to a certain location/level. If there is no backdoor that leads directly to the point where you need to be (e.g. page 5, to test how the fields on that page work together), you can lose a lot of time just getting there. Instead, you can use automated test scripts to emulate getting there, and then do what you are really after - testing.
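
For a web application this can be as simple as a short Selenium script; the URL, element IDs and number of pages below are invented for the example:

```python
#!/usr/bin/env python
"""Sketch: click through a multi-page registration form so that testing
can start straight away on page 5.

The URL, element IDs and page count are placeholders. Assumes Selenium
and a matching ChromeDriver are installed.
"""
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.implicitly_wait(5)
driver.get("https://test-env.example.com/register")  # hypothetical URL

# Fill the required fields on page 1 (hypothetical IDs)
driver.find_element(By.ID, "full_name").send_keys("Test User")
driver.find_element(By.ID, "email").send_keys("test.user@example.com")

# Skip through the intermediate pages
for _ in range(4):
    driver.find_element(By.ID, "next_button").click()

# The browser is now sitting on page 5 - hand over to the human tester.
```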

In the end I'd say that any repetitive action is a candidate for automation, if you don't absolutely need a human being to do it, and if it's repetitive enough. I.e. don't automate something if you are only gonna do it a few times, and/or if automating it is a complicated, time-consuming task that just isn't worth it. In every other case - why not make your (and everyone else's) life easier?

(update from the comments):
3) tools to gather usage statistics in production
An example would be Sentry. I'm sure I also used a tool named "captain" for android applications, but I can't find it right now. Anyway, these tools can help you deal with client bugs, so you won't have to actually try to get reproduction steps from a client - you will have those steps gathered and reported automatically.
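
For illustration, hooking Sentry's Python SDK into an application takes only a couple of lines; the DSN and the failing function below are placeholders, so treat this purely as a sketch:

```python
#!/usr/bin/env python
"""Sketch: report errors to Sentry from a Python application.

Assumes the sentry-sdk package is installed; the DSN is a placeholder
taken from your own Sentry project settings.
"""
import sentry_sdk

sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")

def risky_operation():
    # stand-in for real application code
    raise RuntimeError("something went wrong")

try:
    risky_operation()
except Exception as exc:
    # unhandled exceptions are captured automatically; this shows
    # how to report a handled one explicitly
    sentry_sdk.capture_exception(exc)
```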
