I talked about my experience of dealing with emergencies at my last job in Russia, where I was testing mobile applications for Android - think "Google Maps", but better. I won't mention the name of the company, to avoid attracting search bots and unnecessary attention to this blog post, but if you're interested, it's on my LinkedIn account. Now, this company is amazing in many ways, and it has a huge user base and a reputation to maintain. From time to time, due to various circumstances, I would end up in a situation where I get a new build, it goes into production in a few hours, and a full, proper retest takes about a week. Bear in mind, I still had enough time to do the testing between the craziness, but the craziness still happened from time to time.
The way I dealt with it was using these three tools:
- Cluster (environments, contexts, test cases).
- Prioritize.
- Parallelize.
The secret is that you cannot possibly use these tools if you are not ready. So the solution really is "be prepared". This is what you need in order to use those tools:
- Know your environments (what are they, how are they different, which differences matter and why, are there any special reasons to be doing testing on a particular environment).
- Know how your application is being used and how it is likely to be used after the release (popular workflows, past statistics, core users, geek users, marketing effort).
- Know the high-risk areas (bug-rich areas, understanding of the codebase, functions that make the application useless or annoying to use when broken, what changed since the last release and since the last build).
- Have easy-to-read, documented coverage blocks - you should be able to give a coworker who doesn't know your application well a piece of paper and ask them to test the areas mentioned there. I like to use a coverage matrix in a spreadsheet where each row is a short test idea, e.g. "make sure the field doesn't let through special characters" or "try two users doing this concurrently".
- Know on a deep level how your application works (architecture in diagrams and mindmaps, dependencies between different parts, server-client protocols).
A few tips on how to get there:
- Ask questions early - make sure you understand the whats, hows and whys of every feature. Or at least the whats and whys.
- Document knowledge in diagrams and bullet-point lists: structure!
- Do risk analysis with the team, and use the results both in development and in testing.
- Keep up to date with sales/marketing. If they spent a month advertising some shiny new feature, you wanna make sure that feature is spotless.
- Gather and analyze usage statistics - there are plenty of libraries nowadays (Google Analytics and the like) that allow you to do it. You wanna know how users are actually using your application, not just how you think they are using it (there's a sketch of what event logging can look like after this list).
- Work closely with development - they are an irreplaceable source of information. They can also back you up when you are explaining to the higher powers why you absolutely need to test this feature but can release without testing that one.
- Learn the differences between your environments.
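To make the statistics tip a bit more concrete, here is a minimal sketch of what logging a usage event can look like in an Android app. This is not what we used back then - Firebase Analytics is just one of today's options - and the event and parameter names are invented for illustration:

```java
import android.app.Activity;
import android.os.Bundle;
import com.google.firebase.analytics.FirebaseAnalytics;

public class MapActivity extends Activity {

    private FirebaseAnalytics analytics;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        analytics = FirebaseAnalytics.getInstance(this);
    }

    // Hypothetical: call this when the user finishes building a route,
    // so you can later see which route types people actually use.
    private void trackRouteBuilt(String routeType) {
        Bundle params = new Bundle();
        params.putString("route_type", routeType); // invented parameter name
        analytics.logEvent("route_built", params); // invented event name
    }
}
```

Once events like this accumulate, the analytics dashboard tells you which workflows are actually popular - exactly the knowledge you need when prioritizing under time pressure.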
I talked more, of course, but hopefully these lists make sense on their own. To give you an example of how this worked in real life, this is how it usually went. Instead of testing the full scope of the functionality on all representative devices in different contexts and conditions (on the move, inside buildings, in different map setups, with network fluctuations, etc.):
- I would choose 3-5 devices that were most suitable for the final round (hardware differences, OS versions, popularity, known problems).
- I would choose a subset of tests and cluster them by context. For example, I would take one ride on a bus to the subway and back, carrying three devices with me and switching between them to do all the testing that needed to be done on the move.
- I would ask a coworker to do the easier part of the testing to increase the scope.
- I would only test functionality that had a chance to be affected by the new build (because I had tested everything else in the previous build), plus the core functionality.
- I would do part of the testing with stubs - e.g. on a test environment or using tools like Fiddler to simulate the server - because I understood the impact of that and knew how to emulate the server properly (there's a sketch of such a stub below).
- I would make sure severe bugs that had been fixed and tested a few builds back did not come back (this happens sometimes when code is merged from one branch to another in Git).
And after I was done, I would write an email to the PM (CCing certain team members) specifying which areas were left untested, what risks were involved, and what my recommendation was (safe to release / need more time to test / need to fix known bugs) and why.
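To illustrate the stub idea: besides a proxy like Fiddler, one simple way to emulate the server is a tiny local stub that returns canned responses, so you can point the app at localhost and test client behavior in isolation. This is just a sketch using the JDK's built-in HTTP server - the endpoint and JSON payload are made up:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class StubServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Canned response for a hypothetical /route endpoint the app calls.
        server.createContext("/route", exchange -> {
            byte[] body = "{\"status\":\"ok\",\"eta_minutes\":12}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        server.start();
        System.out.println("Stub listening on http://localhost:8080/route");
    }
}
```

The point is not the tool - it's that you understand the protocol well enough to fake the server end convincingly, which is where the "know on a deep level how your application works" preparation pays off.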
There was certainly an interesting discussion after I was done with my ER, but as I said, I don't remember it very well. This is what I do remember:
- A few people were worried that, after seeing a project successfully go into production without full testing, the PM would decide that full testing isn't actually necessary. We did not have this problem at my company, but I think it's a valid point. I can only suggest educating your managers each release on what the risks are and why they matter (e.g. what the consequences of a failure would be).
- I think it was Adam Howard who asked whether I used any of the crisis techniques in normally paced testing. I totally did, especially the clustering: once you figure out a way to do something efficiently, you can't really go back. Well, I personally am too lazy to go back from being efficient. :-) Also, once you've been through an emergency, you learn to be ready for the next one, so it's a self-reinforcing cycle, really.
- We talked a bit about how important co-location is, in particular for developers, testers and product owners. I also mentioned, and I think a few people agreed, that it's also important to have the right not to be disturbed when you choose to. Sean also mentioned that at his company they dealt with geographically distributed teams by setting up a constant video feed between locations through huge TVs on the wall.
The next ER was presented by Rachel Carson. She talked about speeding up testing (and the whole project) by switching from Waterfall to "Jet Waterfall" - a kind of Waterfall-Agile hybrid, if I understood it correctly: cross-functional teams, co-location, more communication within the team. According to Rachel, testing activities didn't change much - they were pretty good to begin with. But the testers' mindset shifted to a more realistic side: instead of reporting every bug, they started to think about whether reporting a bug was actually useful. Sometimes it was better to talk to development and either get it fixed right away or learn, with a reasonable explanation, that it would never be fixed. They also started to use the Definition of Done rather than pure gut feeling to drive the testing.
These are some of my notes from that ER and discussion afterwards:
- An important developer skill is being able to share information with non-technical people (Rachel).
- When you are forced into defending design decisions, be sure to stay critical about them (Rachel).
- When there are many existing bugs on a project, you can use them as a dataset rather than one by one, to extract useful info. E.g. areas where bugs cluster might need more attention and/or a redesign (Adam and Chris). There's a toy sketch of this idea below.
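As a toy illustration of treating bugs as a dataset: if you can export them from your bug tracker with a component field, even a few lines of code will show you where they cluster. The data and field names here are invented:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class BugClusters {
    // Minimal bug record; in practice you'd parse an export from your tracker.
    record Bug(String id, String component) {}

    public static void main(String[] args) {
        List<Bug> bugs = List.of(
                new Bug("B-101", "routing"),
                new Bug("B-102", "routing"),
                new Bug("B-103", "search"),
                new Bug("B-104", "routing"),
                new Bug("B-105", "offline-maps"));

        // Count bugs per component; the biggest clusters are candidates
        // for extra testing attention and/or a redesign.
        Map<String, Long> byComponent = bugs.stream()
                .collect(Collectors.groupingBy(Bug::component, Collectors.counting()));

        byComponent.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .forEach(e -> System.out.println(e.getKey() + ": " + e.getValue()));
    }
}
```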
The last ER at KWST was given by Adam Howard. He talked about speeding up the whole development process on a very challenging, bound-to-fail project - which, thanks to Adam and his team, did not fail in the end. :-)
This is what I got from it:
- They used visual models (mostly mindmaps) to see/show the big picture - that allowed the team to make a decision to rewrite some functionality rather than to try and fix numerous existing bugs there.
- Fixing all the bugs didn't work because the results of those fixes didn't play well together - another reason to use visual models.
- Adam tried to switch the testing team to the ways of exploratory testing. To make it easier on them, he didn't require them to switch - he just showed the way. Some teams accepted the new way, some didn't.
- It's very challenging to change when you have to do it fast. As a result, after the project was out the door and Adam and his team went back to Assurity (it was a contract project), the remaining testers went back to the old ways. It's not easy to make even good practices stick if they are new.
- Adam and his team partially took on the BA role in order to create the big-picture view of the project. Some (e.g. Oliver and Katrina) argued that instead of doing that, we should push BAs to do their job better.
And I believe that's about it. We discussed the exercises from the day before, played some games, said our goodbyes and went our separate ways. It was a lot of fun, and I learned from every single person who was there. Not everyone is quoted in my two posts (I think I might have missed Ben here, but mentioned him on Twitter. Or vice versa), but everyone was brilliant.
I'd like to finish by mentioning the amazing people who made it happen. Thanks to Oliver, Rich, Janice, Nikki (I hope I spelled their names right) and to our sponsors: @AST_News @SoftEdMan and @AssurityNZ!
Viktoriia, your brain works faster than mine, I think, and retains more information.
I'm sorry I missed your ER; I've been hearing about it from others and only wish I'd been there.
Is there any chance you can discuss this with a few of us Auckland testers in person (e.g. Nicola, Shirley, whoever you'd feel comfortable talking to)? It seems we have a few things to catch up on now; maybe we can set up a time over email :)
Wow, you've been hearing from others about it?! That's so cool, thanks for letting me know! :-)
Sure, let's meet and talk sometime. I'll be comfortable with whatever setting, as long as I'm not the only one responsible for organizing it. What do you have in mind? I'm out of the country in August, though, so it should be either before August 6 or in September... We can meet on a weekend or after work. Let's take it to email! :-)
Regarding my brain - thanks, but if you mean the notes, the secret lies in a different area. :-) I've become quite a fan of taking notes in the last few years, so I just scribbled a lot and then had 7 pages of notes to write these reports from. Off topic, but I've also been trying it out while taking courses on coursera.org, and it turned out to be really helpful in recalling material, even the stuff I did not write down. I think it maybe provides some structure that makes it easier to restore events...