Thursday 8 May 2014

Lessons learned in non-functional testing

Yesterday I gave a lightning talk at my favorite WeTest Auckland meetup, hosted by Assurity. It was a five-minute (okay, maybe six-minute) presentation with 15 minutes of discussion afterwards. So, with only five minutes, I had to make it really short yet still make sense.

The main idea I was trying to express is really simple: plan for non-functional quality characteristics right from the beginning.

As an IT graduate, when I first started to work as a software tester I assumed that developers would do proper technical and database design for the task at hand, and that testing of the product would happen from the very beginning. In other words, I thought it was obvious that someone would start testing as soon as the first requirements arrived from a customer. In reality it wasn't like this at all. Testing usually started at the latest stages of development, which cost projects lots of work hours, effort and money, and often reputational losses. As the years passed, the situation seemed to improve. Nowadays most IT companies accept that testing is important, and that it should begin as soon as possible. With one "but" - it only applies to functional testing.

When I switched from functional to performance testing last September, I no longer had that naive assumption about functional testing being done the right way everywhere... it turns out I had an even more naive assumption: that non-functional testing is done the right way. I haven't done any serious programming in seven years, yet when I occasionally think about implementing this or that, I still operate by the algorithm I learned at uni: I ask myself questions about the problem I'm trying to solve, and about the future of the software that would solve it. I thought it was obvious: when you write, say, a web application, you think about a bunch of stuff on top of functionality: how it would scale, how you would plug in new functionality, how you would localize it, how you would customize it, what questions users will ask about it, how you would test it, and so on. Sometimes most of the answers would be "it doesn't matter because..." - but the important thing is to ask the questions and know that for sure.

Well, in real life this rarely happens. I'm not talking about companies like Google or Amazon - surely those guys know what they are doing, if only because they have enough experience in the area. But most companies aren't Google. What I see in real life is firefighting: an application is created, it goes into production, and then you have problems in production, which are extremely expensive to fix. Some issues are obvious: e.g. security breaches and performance and/or scalability breakdowns. Some issues are more subtle: e.g. bad supportability means a lot of time and effort spent by field agents (support line, operators, or whatever you call the people who talk with unhappy users). Bad maintainability means additional problems when you need to install updates or reconfigure something in a live system. Bad testability means more time needed to test, say, a new release, and potentially worse quality because testers won't be able to reach some areas. Bad initial usability will bite you in the ass when you decide to implement a shiny new GUI that differs from the existing one, because users will resist the change (remember the MS Office 2003 -> 2007 outcry). And what's even worse, fixing a non-functional issue often requires fundamental architecture changes.
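
To make the testability point concrete, here is a small Python sketch (all names and the URL are invented for illustration): the same business logic is painful to test when an external dependency is hard-coded, and trivial to test when the dependency is injected - and retrofitting the second shape onto a grown codebase is exactly the kind of fundamental architecture change I mean.

```python
# Illustrative sketch only: the gateway, its URL and the stub are made up.
import urllib.request

# Hard to test: the external dependency is baked in, so every test needs
# a live payment gateway (or ugly monkey-patching).
def charge_untestable(amount):
    resp = urllib.request.urlopen(
        "https://gateway.example.com/charge?amount=%d" % amount)
    return resp.status == 200

# Testable: the dependency is passed in, so a test can hand it a stub.
def charge(amount, gateway):
    return gateway.charge(amount) == "OK"

class StubGateway:
    """Stand-in for the real gateway; no network involved."""
    def charge(self, amount):
        return "OK" if amount > 0 else "DECLINED"

# Exercised in complete isolation - this is what designing testability
# in from the start buys you.
assert charge(100, StubGateway()) is True
assert charge(-5, StubGateway()) is False
```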

To summarize: non-functional quality is important, it costs a lot to fix, and it isn't given enough attention.
So, the question is: if you work in one of those companies, what can you do?

As part of the team, regardless of your role, you can communicate and underline the importance of the following:

  • When doing tech design for a new application or a new feature, think about the future. What will happen to your application in a year? What new functionality might it have? What will the environment and the load be (e.g. number of users per hour)? Are you going to store any sensitive data? Will you need to create additional applications (e.g. mobile) to coordinate with it? What additional data might you need to store? How would you localize/customize it if necessary? How often will you need to release a new version?
  • Do risk analysis. Always do risk analysis - better to be prepared for the trouble that might come your way.
  • Involve specialists in specific fields if your team lacks them: there are consultancies that can review your tech design, or do security or performance testing for you.
  • Start the actual testing as soon as possible - not only functional testing, but non-functional too. At early stages it will look different from how it looks at later stages: e.g. it makes little to no sense to do full performance testing of a half-made release, but it is possible to do performance testing of existing separate components, especially the core ones (see the sketch after this list). It pays off because the earlier you fix an issue, the cheaper it is.
  • Treat non-functional issues fairly. If the team doesn't fix e.g. testability issues in time, you will pay for it later with much more complicated testing and a higher price for building testability into the project. If performance or security issues are not treated with respect because "it still works, no one will notice it's slow/insecure", you will pay with your reputation and money when it manifests in production. And if your product is a success, it will manifest in production sooner or later. #too_much_painful_experience
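
As an illustration of that point about testing components early, here is a minimal sketch of what component-level performance testing can look like long before the full system exists. It uses only the Python standard library; `parse_order` and the 1 ms latency budget are invented for the example.

```python
# Minimal sketch of early, component-level performance testing.
# `parse_order` is a made-up stand-in for any core component you can
# already exercise in isolation; the 1 ms budget is likewise invented.
import time
import statistics

def parse_order(raw):
    # Hypothetical core component under test.
    return dict(pair.split("=") for pair in raw.split("&"))

def measure_latency_ms(fn, arg, runs=1000):
    """Call fn(arg) repeatedly; return sorted per-call latencies in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(arg)
        samples.append((time.perf_counter() - start) * 1000)
    return sorted(samples)

samples = measure_latency_ms(parse_order, "id=42&qty=7&sku=ABC-123")
p95 = samples[int(len(samples) * 0.95)]
print(f"median: {statistics.median(samples):.4f} ms, p95: {p95:.4f} ms")
# Fail fast if the component blows its (made-up) budget - far cheaper to
# learn this now than during full-scale load testing of the finished release.
assert p95 < 1.0, "parse_order p95 latency exceeded the 1 ms budget"
```
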
After the talk, one of the testers at the meetup asked whether all of this makes sense for startups and the like - she implied that in the beginning you don't want to spend time thinking about all this stuff, because you are under pressure to deliver fast. It's a valid point of view, but I strongly disagree with it. The only case in which you can forget about quality is when you are making a proof of concept that you will then throw out the window and start anew. If you are going to keep using the codebase of your first release, it pays off greatly to spend a few extra hours (and at the very beginning you would only need a few extra hours) analysing the future of your application and doing your tech design (database design and application design) properly.

Remember: there is nothing more permanent than a temporary solution.

Slides for the presentation are shared here:
https://docs.google.com/file/d/0Bxi4eMT3I97ea3luRU9nemRpODg/edit

3 comments:

  1. Agree with the guy who asked about startups. There are software development processes that say: don't do more than you need to solve the problem at hand. By that logic, thinking about future performance issues is premature optimization :)
    Of course, such processes are not applied everywhere. And if you have releases and deadlines, you have to think about testing this way.

    Replies
    1. I have one word for you: FXTR. It started as a single-page registration form, so no one thought about the future. And what a hideous monster we got a few years later!

      And the question was asked by a girl actually, not by a guy. XD

  2. Hey,
    Thanks for sharing about non-functional testing. Keep posting!
