Saturday 12 July 2014

#KWST4 day one

I just got back from the awesome two-day testing workshop KWST4. It was a bit overwhelming for me, with the amount of intense thinking and socializing, and I'm still a bit out of it, but nonetheless I'll try to write down all the highlights while I still remember them.

There were at most 17 people in the room; KWST is a pretty small workshop, but that's likely what made it so good. Everyone was participating in discussions, and everyone got to share their experiences and pains. :-) We had four ERs (ER = experience report) on the first day, and three more on the second day. There were also exercises, testing games, and a lot of socializing over coffee/lunch breaks. The way ERs worked was that someone presented their experience, and then we all had a facilitated discussion around it. The main topic of the workshop was "How to speed up testing, and why we shouldn't".

The first ER was presented by Sean Cresswell, a test manager from Trademe. Sean shared his experience of how changing the way risk-based testing was perceived in the team helped speed up testing. The business unit was already doing what they called risk analysis, but somehow tests still ended up prioritized with the specification coming first. That led to the most important bugs being found closer to the end. Changing the way the BU did risk analysis and reordering the tests changed the perception of the testing. It might have taken about the same time as before, but now the most critical bugs were found at the beginning of testing, which gave the impression that the testing itself had sped up. And come on, we all know that finding serious bugs early is good for so many reasons besides the perception of testing. :-)
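
To make the reordering idea concrete, here's a minimal sketch of what risk-based prioritization might look like. This is my own toy example, not anything from Sean's report: the test names, the 1-5 scales, and the simple likelihood-times-impact score are all assumptions.

    # Hypothetical sketch: order tests by a risk score instead of spec order.
    # Likelihood and impact are on an assumed 1-5 scale.
    tests = [
        {"name": "checkout_payment", "likelihood": 4, "impact": 5},
        {"name": "profile_avatar_upload", "likelihood": 2, "impact": 1},
        {"name": "listing_search", "likelihood": 3, "impact": 4},
    ]

    for t in tests:
        t["risk"] = t["likelihood"] * t["impact"]  # simple multiplicative score

    # Run the riskiest tests first, so critical bugs surface early.
    for t in sorted(tests, key=lambda t: t["risk"], reverse=True):
        print(t["name"], t["risk"])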

Some ideas that came out of this discussion were:
  • Risk analysis should affect priorities in development as well as in testing (I think that was from Rachel Carson, but I'm not sure).
  • Plan the work so that everyone (devs, BAs, testers) is busy all the time - as a planning strategy (Oliver Erlewein).
  • Instead of step-by-step how-to guides, it's sometimes useful to write down lists of questions whose answers will lead a person through the process (Andrew Robins).

There were also some pretty heated discussions around the definition of "risk" and such. I can personally recommend Rex Black's book on risk-based testing. Funnily enough, when I tweeted about the book and jokingly mentioned that it's good despite Rex's evilness (he's involved with ISTQB, and ISTQB is evil), I found myself in a Twitter argument about Rex's evilness. Huh. Sometimes life is weird.

Moving on, the second ER was from Chris Rolls, who talked about examples of successful and unsuccessful use of test automation. One example I found interesting was using automation to quickly do regression testing of a web app after a security patch was applied.
Chris also formulated a pretty cool (if you ask me) approach to test automation: when automating existing tests, first transfer the test objective. He talked about the common perception of automated tests as step-by-step repetitions of existing manual tests. It's easy to lose the "why" when thinking like this: why are we automating this test, and why do we test it this way? It may well turn out that after transferring the test objective first, you find you can reach that objective in a different, more efficient way in automation.
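
To illustrate what I took from this (my own toy example, not Chris's): suppose the manual test's objective is "a new user can register and log in", scripted as a dozen UI clicks. Transferring the objective first might lead you to check it directly against the application's API instead of replaying the clicks. Everything below is assumed: the endpoints, the URL, and the use of the requests library.

    # Hypothetical sketch of "transfer the test objective first".
    # Objective: a new user can register and then log in.
    # Instead of replaying the manual UI steps click-by-click, the
    # objective is checked directly against an (assumed) HTTP API.
    import requests  # assumes the app exposes a REST endpoint

    BASE = "https://example.test"  # placeholder URL
    CREDS = {"email": "t@example.test", "password": "s3cret"}

    def test_new_user_can_register_and_log_in():
        r = requests.post(BASE + "/api/users", json=CREDS)
        assert r.status_code == 201
        r = requests.post(BASE + "/api/login", json=CREDS)
        assert r.status_code == 200  # objective met, without scripting any UI clicks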

The discussion was mostly about automation in general and about testing roles. I liked this idea from Oliver: he said he uses roles to protect people from being pulled away from their main tasks. The example he gave was a tester of his who was responsible for developing a testing framework, and Oliver had to protect him from being used as a manual tester when there was a shortage of those. Apparently, naming the guy "test automation architect" or something like that makes a huge difference to management. :-)

After the lunch break Thomas Recker presented his ER on coming in to do test automation late in the project. He talked about his experience of being called in to "come and automate" when it was too late to influence test design or decisions around the testability of the application. He was also asked both to do the testing and to fit some existing list of functionality coverage that didn't play well with how the tests worked. In the end he had to balance actual testing with the bureaucratic part and the "make it stick together" parts of the job. I think we could all agree that coming into a project early, with the opportunity to do test design and such with automation in mind, helps with implementing test automation.

There was an interesting discussion around test automation strategies: when does it make sense to automate, and how do you choose what to automate? A few things that were interesting to me came out of it:
  • Andrew suggested trying to see an existing test as a measurement: if it can be seen as one, it's a good candidate for automation.
  • Till Neunast mentioned that it makes sense to automate testing around business logic when it's implemented in files/modules that change often. There's no sense in having an automated test that always succeeds because the part of the application it checks never changes.
  • Aaron Hodder insists that you cannot automate testing, because "once it's automated, it's a different 'thing'".
  • Oliver described a system they have at his workplace: an internal twitter-like tool connected to the test automation framework. It works both ways: you can tweet at the system to make it start some tests, and the system constantly tweets about what it's doing and what the results are. Pretty cool! (A toy sketch of the idea follows this list.)
  • Joshua Raine raised a question about differentiating between test automation that helps you here and now and test automation that requires investment now with a potential payoff in the future, and about maybe finding other types of test automation in the same classification.
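
Purely as an illustration of Oliver's idea (the real tool is internal, so everything here is made up): a loop that watches a message stream for commands and posts results back. The stream functions are stand-ins, and I've assumed pytest as the runner.

    # Hypothetical sketch of a chat-driven test runner, loosely inspired by
    # Oliver's description. The message-stream functions are stand-ins for
    # whatever internal twitter-like API the real tool uses.
    import subprocess
    import time

    def read_new_messages():
        """Stand-in: fetch commands like 'run smoke' from the stream."""
        return []  # the real tool would poll its internal feed here

    def post_message(text):
        """Stand-in: post a status update back to the stream."""
        print(text)

    while True:
        for msg in read_new_messages():
            if msg.startswith("run "):
                suite = msg.split(" ", 1)[1]
                post_message("starting suite: " + suite)
                result = subprocess.run(["pytest", "tests/" + suite])
                post_message("suite %s finished, exit code %d" % (suite, result.returncode))
        time.sleep(10)  # poll the stream every 10 seconds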

At the end of the day Andrew Robins presented ER number four. His topic was "Enabling the team as a technique to speed up testing". Andrew worked in a challenging environment where the company produced not just software but hardware as well. That meant the test environment consisted of specially made prototypes and was extremely expensive. Previously (before Andrew did his magic) that led to a huge bottleneck: testers had only one environment, and it had to be reconfigured for different tests. Reconfiguration took three days. You can see how it's not good to make 40 people wait while the only existing test environment is unreachable.
The way Andrew solved that and sped testing up significantly was to plan for testing in advance. Months before the testing was to begin, he started on the task of getting more environments. Because there was enough time, the company was able to budget and plan for two test environments, which meant far fewer reconfiguration bottlenecks as well as the opportunity to run different tests in parallel.

The important lesson here for me was to plan for testing as well as to plan testing. To be fair, we are doing that in OrionHealth, but having it as a consciously formulated strategy is so much better than just doing it intuitively!

One interesting part of the discussion that I managed to note down was about other ways of handling a limited environment when there is no way to get a new one. The consensus was that one great technique is to stub/emulate the parts of the environment you need. I might also add that it can be a good idea to challenge the "we cannot get another environment" notion and go to the cloud. The cloud lets you spin up additional environments quickly, securely, and really cheaply, so unless some specialized hardware is needed, there's no reason not to use it.
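
As a toy example of the stubbing idea (my own sketch, nothing from the discussion itself): a tiny local HTTP server standing in for an unavailable downstream service, using only the Python standard library. The port and the canned response are arbitrary.

    # Hypothetical sketch: stub an unavailable downstream service with a
    # tiny local HTTP server, so tests can run without the real environment.
    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StubHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Always return a canned response, whatever the real service
            # would have computed.
            body = json.dumps({"status": "ok", "stubbed": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    def start_stub(port=8099):
        server = HTTPServer(("localhost", port), StubHandler)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        return server  # point the app under test at http://localhost:8099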

There was also plenty of stuff captured on Twitter; I do recommend going through the #KWST4 hashtag there. And of course the discussions were pretty intense, so I didn't have a chance to note down/tweet most of it, and probably left out a lot. It would be interesting to compare with other blog posts on the event. :-)

At the end of day one we also did a group exercise: we had 10-15 minutes to come up with a common answer to each of four given questions. I teamed up with Adam Howard and Nigel Charman. I like the answers our team gave, and I would totally hire anyone who demonstrates the kind of thinking we demonstrated. But funnily enough, it turned out that some of the anonymous test managers who assessed our answers called asking reasonable questions "being evasive", and were more concerned about formatting and punctuation than about the actual content and the skills it showed. Well, what can I say... I wouldn't hire those test managers and wouldn't go work for them either. It also, interestingly, shows that the hiring process in the testing industry is not at its top game in many companies.

Unrelated to the exercise, sadly enough, some are still looking for drones instead of testers and wouldn't even consider a really good tester who doesn't have an ISTQB certification or N years of experience with some tool. Luckily for us, there are also plenty of smart IT companies on the market who hire people to do the job, not to fit in a cell. :-)

Day 2 to follow.
